Three principles are considered fundamental to any Artificial Intelligence (AI) enterprise – safety, security, and trust – but this is the story of an AI product in which all three were compromised. The best and most qualified engineers were commissioned to build a state-of-the-art missile with unparalleled traits and unheard-of performance potential. The company had long been famous for developing anthropic AI; this, however, was supposed to be an instance of philanthropic AI, but it turned out to be misanthropic AI. This group of experts had delivered several devices in the past and had many accomplishments to their credit. But this masterpiece was based on generative AI. After going through different stages of pre-training, it was subjected to a period of self-supervised learning as well. In the first test run, it transpired that the system had difficulty interpreting commands that required conformity to norms. The system was troubleshot by attaching an additional processor to the mainframe that could translate the instructions into workable outputs. Later on, a number of such add-on processors were used, but the glitches could not be overcome. Another issue that frustrated the engineers was that they failed to teach the software how to tell friend from foe. As a result, in many trial launches the missiles targeted their own installations, incurring heavy losses in terms of lives and infrastructure. There was a common understanding that an algorithm based on the principle of “attention is all you need” would be sufficient, but it turned out that other things are also needed for smooth and sustainable operation.
Secondly, it used an LLM – a large language model – and a device based on such a model can generate false information and is prone to what are called Artificial Intelligence hallucinations. The device got entangled in realistic fakes and deepfakes on its own, without any apparent reason, and no expert could find a cure for the malfunction. The software had trouble interpreting commands, so its actions did not conform to the instructions. It was observed that while the engineers could teach it feelings like anger, rage, hatred, revenge, and vindictive persecution, they failed to teach it the one element vital to its proper functioning: trust. Obviously, they could not rely on the system more than they trusted it. There was a real possibility of adverse events serious enough to have irreversible consequences. Expert investigations were carried out and opinions sought, but it became evident that there was no solution other than to recall the product and let the huge investment involved go down the drain. But there was a problem with the decision to recall it: many entities, institutions and individuals alike, had already bought the product. They had been using it for many years, had developed an affinity for it, and did not want to abandon it because of the benefits they saw in it. What would happen to them? It was unanimously decided that the damage caused would be declared collateral damage, and that they would be allowed to continue using the product. The company was already known for building many products for national security and in the national interest. But it was also infamous for recalling some of its products; in particular, it had recalled some of its rockets in the past. Those rockets, however, were meant for local use or were short-range devices. This was the first time the company had to recall a full-scale, long-range missile capable of carrying nuclear warheads.
In earlier instances, the products were not so widely circulated, and their use was confined to specific regions and prescribed users, unlike the present case, where the product was aggressively marketed far and wide.

The writer is a faculty member at Quaid-e-Azam University, Islamabad. He may be contacted at ksaifullah@fas.harvard.edu