“When I came to you with those calculations, we thought we might start a chain reaction that would destroy the entire world. I believe we did.” The quote is from the multiple-Oscar-winning film Oppenheimer, in which the lead character, Dr J Robert Oppenheimer, is speaking to Einstein about the invention of the nuclear bomb. Though the quote concerns nuclear weapons, some might say it describes today’s AI-powered lethal autonomous weapons systems (LAWS) even more accurately. According to DoD Directive 3000.09, an autonomous weapon system is defined as: “A weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override the operation of the weapon system, but can select and engage targets without further operator input after activation.” The definition is accurate but broad: by its terms, pit traps were the first autonomous weapon systems, and land mines qualify as well. These are not the weapons people picture when they think of autonomous weapon systems (AWS). In modern usage, the term mostly describes AI-controlled unmanned weapon systems, the product of rapid digitalization. Since the inception of warfare, humanity has strived to stay one step ahead of the enemy, whether through some new projectile weapon or a startling use of fire. Centuries of this cycle have brought us to a present in which it is no longer hypothetical to fear a horde of drones acquiring a target and taking the shot with no human operator.
The benefits of AWS include precision, flexibility in combat, reduced loss of life, and cost-effectiveness. Another is their uncomplicated nature compared with human-operated weapons: an AI would, in theory, never defy an order unless programmed to do so. If we could explain the current autonomy of weapons to a general from the 16th century, he would probably conclude that humans had made “the perfect soldier”. The disadvantages of AWS include unintended consequences, proliferation to non-state actors and, most importantly, the ethics of placing human lives in the hands of a machine. LAWS could also have long-term effects on the strategic cultures of states, leaving us with a world in which state relations are shaped by AI. There is an ongoing international debate on whether the use of autonomous weapon systems should continue. If it continues, what level of human oversight is necessary? If it is discontinued, what would the implications be? The discussion comes down to the concept of “keeping a human in the loop”. There is as yet no consensus on the status of these weapons or their use in modern warfare. The technology is already in use in varying capacities, whether it be Ukrainians intercepting Russian communications, Turkish forces allegedly launching a fully autonomous drone attack in Libya in March 2020, or the active use of loitering munitions in the Russia-Ukraine and Israel-Palestine conflicts. In this debate, some states favour keeping a human in the loop; others do not. The advancement of AI-based LAWS has set off an arms race once again, and the utility is simply too great for states to ignore. This is why we do not see the major powers openly issue a directive against it, even though the UN Secretary-General has tried to find common ground between them.
A game of chicken is under way between states, one that will supposedly determine the victor of this race. The future of LAWS is at a more critical point than ever. The powers that be must set their priorities straight: either live in a world where a simple programming error could lead to a world war, where the meanings of sovereignty, morality, ethics and the international landscape are shaped by the decisions of machines, or take calculated action while we can still control the fallout of this technology. There is a serious need for the international community to agree on binding treaties that balance innovation with human oversight. Actionable frameworks need to be put in place to ensure that the future of policymaking is not algorithm-driven, before, without our noticing, negotiations are being conducted by robots in navy blue suits and bright red ties. Only through foresighted dialogue can states reach a fruitful and acceptable arrangement, for the greatest danger of LAWS, their inflexible decision-making, is also the very source of their utility.

The writer is a freelance columnist.