From Heisenberg’s uncertainty principle to the discovery of gravitational waves, and from Hawking radiation to the backward travel of quantum particles, science has produced many breakthroughs, yet none, perhaps, is likely to shape our future as significantly and as decisively as the world of Artificial Intelligence (AI). Is artificial intelligence one of the finest works of the human brain? That’s debatable, but whether you’re fond of the technology or not, one thing is certain: AI is here to stay. And the question that may haunt us in the days to come is this: will the creation pave the way for the destruction of its own creator?

Nuclear security and artificial intelligence experts at RAND may have answered this in detail in a recently published report. The report emphasises that the phenomenon of ‘nuclear deterrence’ could now find itself in a precarious situation. Even if the probability of a mushroom cloud hanging over the world remains low at this point, the chances of a catastrophe will only increase as AI grows stronger. As the experts argue, “It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.”

For instance, better, smarter and more intuitive AI agents could lure nuclear-armed countries into believing that their nuclear command and control is growing increasingly vulnerable to threats, provoking reckless decisions. Nor can we ignore the possibility that launch orders might be given on the basis of inaccurate data, or of a simple miscalculation by an AI agent. After all, AI is only fed man-made data.

The report further explains how AI is likely to make matters worse in terms of a nuclear catastrophe. Consider this: North Korea has tunnels in which to place and position its Inter-Continental Ballistic Missiles (ICBMs) in case of a contemplated strike.
In the event of such a launch, the United States would have less than 15 minutes to react. But here is where the situation gets murky. With the application of AI, the United States would be better able to predict the launch sites in North Korea, leaving Pyongyang with two options: either build more launch sites in the hope that some of them escape detection by American AI agents, or develop a more sophisticated nuclear arsenal that is, presumably, harder to detect.

If you’re already concerned about the future of nuclear warfare, there is every reason to be even more circumspect about what lies ahead. If a country comes to rely more on AI, particularly in its nuclear strategy, the nuclear environment can be subverted in many ways. From hacking to input manipulation to the corruption of training data, the report discusses in detail how AI-operated nuclear safety and control procedures could be compromised.

Another aspect that needs consideration is that any AI used in nuclear command and control would have to be trained on hypothetical data and wargaming. Nuclear weapons haven’t been used since 1945, so there is no realistic, accurate data to feed the AI. Hence there is too much conjecture, and if the nuclear-armed states were to rely heavily on it, ‘false alarms’ would surely follow.

Although the report goes on to enunciate some factors by which AI might enhance strategic stability, the thought of a data-driven AI getting launch orders wrong is frightening, to say the least. And it has to err only once; if it does, we wouldn’t be around to witness AI make the wrong call a second time, thanks to the ‘meagre’ number of nuclear weapons most countries possess.
And if the creation does become instrumental in tormenting its own creator, will the inhabitants of Pakistan and India be the first to experience doomsday? Considering the eagerness to play with the bomb on both sides of the LoC, you just can’t rule that possibility out. Artificial intelligence, perhaps, isn’t intelligent enough to ensure that nuclear devastation never enters the realm of reality. Here’s the plain fact: with or without AI, a single glitch is all it would take to annihilate us all.

The writer is a counterterrorism and security analyst.

Published in Daily Times, May 4th 2018.