On 6 April, a reporter posted on Twitter that the former Finance Minister of Pakistan, Miftah Ismail, had "tried to escape Pakistan" and was "stopped at Karachi airport" after his name had been "recommended to be placed on the Exit Control List (ECL)". The tweet was retweeted over a thousand times and "liked" over 4,000 times. Shortly after, various activists and journalists stepped in to clarify that this information was "fake news".

Of course, this is not the first time "fake news" has been circulated, with potentially damaging effects on the reputation of a high-profile personality. There has been a similar pattern of misinformation and disinformation spread through broadcast media, as well as social media and online platforms. It was reported in February this year that the Information Minister of Pakistan, during a briefing to the Senate Standing Committee on Information, emphasized the need to "set up a new regulatory authority for all types of media". However, the approach outlined thus far by the government suffers from serious shortfalls (which merit an entirely separate and focused discussion).

Pakistan is not the only country seeking to deal with technological advancements that have ushered in a whole new range of political, legal and security concerns. This is a broad area for debate and discussion, particularly once we begin discussing the mainstreaming of Artificial Intelligence (AI). Such a broad discussion, however, would extend beyond the space allotted here, so it may be more beneficial to focus solely on the phenomenon of "fake news". How do we understand this concept within a Pakistani setting? Are there identifying features that can help us distinguish between true, verified information, on the one hand, and false or manufactured information, on the other? These are questions we ought to answer and push for debate on, regardless of progress (or lack thereof) on the regulation of digital spaces.

As a starting point, it is important to understand the distinction between different types of information, including "disinformation", "malinformation" and "misinformation". These terms have been defined quite aptly by Claire Wardle in "Information Disorder: The Essential Glossary" (a highly recommended read for anyone seeking clarity on these issues). Wardle defines "disinformation" as "false information that is deliberately created or disseminated with the express purpose to cause harm"; "malinformation" as "genuine information that is shared to cause harm", including "private or revealing information that is spread to harm a person or reputation"; and "misinformation" as "information that is false, but not intended to cause harm". It is important to bear these distinctions in mind when developing mechanisms to counter the spread of false information.

Earlier this month, Facebook announced the removal of hundreds of pages and fake accounts it alleged were linked with the Pakistani military, more specifically the Inter-Services Public Relations (ISPR), as well as accounts connected with Indian political parties.
It is no secret that the security establishment in Pakistan has run several vicious campaigns online against bloggers, activists, journalists, members of the PTM and almost anyone remotely critical of its policies. It certainly isn't alone in pursuing such campaigns: intelligence agencies and governments around the world adopt such tactics regularly.

What is interesting about these campaigns is the combination of information utilized. For instance, when ISPR-linked accounts campaign against women critical of them, they tend to adopt an approach that juxtaposes "disinformation" and "malinformation" against one another. A similar technique has been utilized in their anti-PTM campaigns: they circulate genuine images of women or members of the PTM alongside images and information they know to be false, to intentionally create doubts and cast aspersions on the character and motives of these persons. ISPR-linked accounts have recently mastered this technique, then using "botnets" to amplify the spread of damaging information as part of these targeted campaigns. Wardle defines a "botnet" as "a collection or network of bots that act in coordination and are typically operated by one person or group". This is then reinforced through what Wardle refers to as "manufactured amplification", i.e. "when the reach or spread of information is boosted through artificial means", including through human or automated promotion of specific hashtags on social media.

It is often quite difficult to distinguish between inaccurate or false information and reliable, credible information, particularly in digital spaces that allow an even freer exchange of ideas and opinions than traditional media. There is no foolproof way to regulate these spaces, and certainly no checklist of boxes to tick off to help us gauge the accuracy of the information available to us. What is required, at an individual level, is to develop an understanding of these concepts and of the manner and ease with which false information can be disseminated, with impunity for those behind its spread.

Those calling for a centralized regulation authority are perhaps either overestimating the ability of traditional mechanisms to deal with non-traditional platforms for information sharing, or underestimating the severely negative impact such concentrated regulatory power can have on the right to freedom of expression. There is no one-size-fits-all solution, and the world is still struggling with how to deal with the novel and complex challenges stemming from technological advancements that have transformed the digital space into a potentially explosive zone, where elections can be manipulated, false information shared without consequences for those who spread it, and immense damage done to the credibility of any given individual or group.

Instead of rushing to regulate, it would perhaps benefit our law- and policy-makers to first delve into identifying and understanding mechanisms (existing and proposed) around the world; and second, to carry out comprehensive consultations (taking on board all relevant stakeholders) and impact assessments relating to these mechanisms, paying particular attention to the international human rights framework. What we may believe is better regulation today may well prove disastrous and damaging to our very own interests tomorrow.

The writer is a lawyer.