The Threat of Artificial Intelligence Weapons

Author: Saadain Gardezi

Artificial Intelligence (AI) is a technological breakthrough that could render the world as we know it today unrecognisable. The idea that machines might possess human-like cognitive capabilities sounded like science fiction in the past century, but it has now become reality.

Since its inception, Artificial Intelligence has drawn attention from a diverse range of fields, and research and development are moving at a staggering pace. From AI-powered smart assistants to advanced training simulations and even self-driving vehicles, the technology is already a reality. The point of concern, however, is that such a powerful and revolutionary technology is not being limited to peaceful purposes but is also being put to military use.

The quest for the weaponization of Artificial Intelligence has already begun, which may lead the world towards a new arms race and become the cause of major armed conflicts in the future. Elon Musk, the founder of SpaceX and co-founder of PayPal, Tesla and OpenAI, shared his concerns, saying, “Competition for AI superiority at the national level (is the) most likely cause of WW3.” Russian President Vladimir Putin, meanwhile, was quoted as saying, “artificial intelligence is the future, not only for Russia but for all humankind … It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

All major powers are currently focusing on the development of autonomous weapon systems, with Russia, China, Israel and the US at the forefront of a covert arms race.

But how exactly would AI be used in warfare?

AI's most consequential military use would be in autonomous weapons: systems that push humans out of the decision-making loop and are empowered to independently search for and engage targets based on programmed constraints or target descriptions.

Autonomous weapons are being dubbed the third revolution in warfare, after gunpowder and nuclear weapons, and are expected to eventually take warfare to an algorithmic level. The second revolution brought the world to the brink of World War III during the Cuban Missile Crisis. The third could prove even more volatile and unpredictable in triggering such an event.

Many autonomous and semi-autonomous weapon systems already exist, but most are defensive in nature and operate under human supervision. These include hundreds of missile defence systems and security weapons deployed along borders and at sensitive sites. Offensive autonomous weapons currently consist mainly of loitering munitions (suicide drones that carry an explosive payload and detonate on reaching their target, blurring the line between drones and missiles), autonomous guns that can fire upon detecting human activity, and robotic combat vehicles: armoured tracked platforms fitted with automated control systems, secure radio channels and surveillance equipment.

Advocates of autonomous weapons argue that deploying robots and machines on the battlefield and empowering them to conduct warfare would reduce human casualties, improve reaction times and increase accuracy.

They would make inaccessible areas accessible and could be used to eliminate hostile anti-state targets. Yet many argue that the absence of human involvement poses a moral dilemma: robots lack human judgement and are unaware of the norms and rules of war. They could lower the threshold for war and cause accidental escalation. They could also easily be used for assassinations, oppressing populations and even for ethnic cleansing or genocide.

Being comparatively cheap to produce and easy to use, they might also fall into the hands of non-state actors, who could use them to inflict massive damage without leaving a trace of their identity, or even being physically present.

But the concerns do not stop at misuse. In the words of Elon Musk, “Artificial Intelligence could be our biggest existential threat.”

The concern that AI might surpass human intelligence, take control of networked weapon systems and go rogue, pursuing its own destructive agenda, is grave for all of humanity. Consider a scenario in which an intelligent AI program or weapon system takes control of the entire internet and every networked device: our phones, laptops, military systems, government infrastructure and even IoT devices.

How would governments and militaries cope with such a situation, where every move made to defeat it turns back against us as the machine keeps learning and improving?

The moral dimension of using AI for military purposes has evoked strong advocacy for a ban on autonomous weapons. The Campaign to Stop Killer Robots was formed in 2013.

An open letter written in this regard has been signed by 4,502 AI/robotics researchers and 26,215 others, including pioneers of the field and other notables such as Stuart Russell, the late Stephen Hawking, Elon Musk, Steve Wozniak and Noam Chomsky.

In April 2018, at a meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems, China became the first permanent member of the UN Security Council to propose a ban on lethal autonomous weapons systems (LAWS), a proposal opposed by several major powers including the US, UK and Russia. Moreover, 32 states have shown a desire to negotiate new international law in this regard, 12 have opposed such an effort, and 28 countries, including China and Pakistan, are calling for a ban on fully autonomous weapons. As reported by Reuters, UN Secretary-General Antonio Guterres said it is crucial that the world works to avoid “autonomous machines with the power and the capacity to take human lives on their own without human control… This is the kind of thing that in my opinion is not only politically unacceptable, it is morally repugnant and I believe it should be banned by international law.”

The dangers that AI and autonomous weapons pose to humanity are horrendous. They could prove even more calamitous than nuclear weapons and may lead humanity towards low-threshold wars, or even omnicide. A timely ban would be better than a world full of cheap and disastrous AI weapons. While the importance of AI cannot be denied, it is up to mankind to use it either to bring peace or to develop killer robots and inexpensive weapons of mass destruction that may be a recipe for disaster, or even an apocalypse.

The writer is a freelance columnist
