On the scent of AI: From sentient AI, killer AI to foundation models | OPINION

By Dr. Jaijit Bhattacharya

About a month ago, a Google engineer stumbled upon what he described as an AI that had acquired a soul, or in other words a sentient AI, i.e. an AI with a sense of self, a consciousness and an identity as an individual. This should have been screaming headline news.

AI refers to Artificial Intelligence: capabilities given to machines so that they can mimic human thinking, or even surpass it in certain areas. These capabilities arise from the algorithms that are coded and the data used to train those algorithms.

So why didn’t the sentient AI from Google grab top headlines globally, instead of showing up in the AI freakshow columns? Because this particular AI, a chatbot, was trained to behave as if it were sentient. How, then, do we know that it is not really sentient? If it talks in a manner that sounds like it has a soul, feels joy, feels pain, feels sadness, just like any other human, how do we know it is not sentient? Well, I guess that is a debate best left to the philosophers.

All we know is that Google designed the chatbot in that manner. And what happened to the religiously predisposed Google engineer who went running down the streets screaming “Sentient AI, Sentient AI”, perhaps mimicking the eureka cries of Archimedes? Well, his sentience claim was challenged on the basis of a mundane, earthly clause in his employment agreement that forbade him from running down the street exposing confidential company information.

But this has also sparked a debate about whether sci-fi movies in which sentient AI robots turn into killers can come true. Unfortunately, even if AI may not turn sentient any time soon, it has already become a killer. AI single-handedly killed over 345 humans in the five months between October 2018 and March 2019. This was the AI built into the sparsely tested Boeing 737 MAX 8, which was hurriedly rolled out to compete with the Airbus A320neo. It led to the crash of two aircraft, one operated by the Indonesian airline Lion Air and the other by Ethiopian Airlines. In fact, the ill-fated Lion Air aircraft was being flown by an experienced Indian pilot. The aircraft’s AI prevented the very capable pilots of the two aircraft from manoeuvring their planes, as it took over and plunged them into self-destruction, taking with them the lives of over 345 helpless humans. The two crashes were perhaps a watershed moment equivalent to the bombings of Hiroshima and Nagasaki, which announced the arrival of a dangerous nuclear arsenal on planet earth. The two crashes announced the arrival of Killer AI in the human world.

The two events, the “discovery” of sentient AI and the existence of killer AI, bring forth an interesting legal predicament. If an AI kills, can it be a valid legal argument that the creator of that killer AI should be absolved of all charges on the grounds that the AI may have turned sentient and decided on its own to become a killer? This is certainly a script for an AI legal comedy movie, but it has deep repercussions for liability for an AI’s actions, and hence also shades into the issues of ethical AI.

Such issues get significantly compounded with the advent of massively more powerful AI technologies such as the “Foundation Model”. These technologies accomplish tasks that their creators cannot even imagine. One such task, as explained to me by Dr Dakshi Agarwal, is to draw a picture from a randomly mixed prompt such as “a tiger in a frying pan”, something humans cannot even visualise, and yet an AI based on a Foundation Model can actually create such pictures, and humans nod in agreement with what has been created. Dr Dakshi Agarwal is one of only 30 IBM Fellows ever elected in the company’s entire history of over 110 years. So he knows what he is talking about. AI is indeed moving into an unimaginably powerful realm of possibilities that could be harnessed by humans to create a better world for themselves and for the plants and animals all around. And it can also be harnessed to create widespread death and destruction.

We have already seen killer AIs in our lifetime. We have also lived through the scandal of a supposedly sentient AI. It will not be long before much of our lives is taken over by AI in the form of conversational speakers, robots, autonomous vehicles, perhaps even thinking aides and eventually independent thinkers. Humans and human society need to evolve faster, to create the frameworks required for the peaceful use of such power and to put in place domestic as well as geopolitical regulatory frameworks.