Misinformation, hatred and emotional exploitation have always been favourite tools of instigators, miscreants and cults. The recently revealed Facebook Papers indicate that social media has become an important channel for those tools. The Papers, scrutinized by more than a dozen news organizations, point to a systemic problem at both the policy and technical levels. Frances Haugen, the whistleblower who disclosed the Papers and testified before the US Congress and the British Parliament, stated: "The company (Facebook) was aware of the spread of misinformation and hate content worldwide, from Vietnam and Myanmar to India and the US."

This spread of misinformation and hatred through digital means also afflicts Pakistan. There were 46 million social media users in Pakistan in January 2021, equivalent to 20.6% of the total population. With such a temptingly large and growing market, Facebook appears laser-focused on growth and profits rather than on regulating user-generated content, the real source of its income. A multi-country study by the Next Billion Network reveals that Facebook dismissed 'large volumes of legitimate complaints, including death threats,' in Pakistan, Myanmar and India. An internal audit posted by a Facebook employee in June 2020 noted a 'massive gap' in coverage of 'at-risk countries' and that Facebook's algorithm could not parse the local languages of Pakistan, Myanmar and Ethiopia. Facebook's community standards were not even translated into Urdu, the national language of Pakistan; as a result, its rules on misinformation and hate content could not be meaningfully applied to what was distributed over social media there. As with many global trends, major social media networks have changed the manner of communication and conversation, and with it the perceptions and thinking of people.
For the average user, credible information is not what major news outlets report, but what a network friend or follower posts. Whether it is a successful product, business, service or a falsely crafted propaganda piece on social media, many people accept it at face value without critical examination. Given the foregoing, it is now possible to connect the dots and explain why fake news and hate speech spread so quickly. Researchers, including those at CITS (the Center for Information Technology and Society) and studies indexed by NCBI (the National Center for Biotechnology Information), conclude that a mix of trolls, bots and ordinary users combines to accelerate the spread of false and hateful content.

Trolls are individuals who maintain accounts on social networks for one purpose: to spread content that incites people, insults others (including public figures) and pushes damaging and incendiary ideas. They are the 'masters of disaster' of social platforms when it comes to propagating hatred and fake news. Bots are software programs designed to simulate human behaviour on social platforms and to interact repetitively with other users; CITS estimates that there were around 190 million bots on Facebook, Twitter and Instagram by 2018. Ordinary users, finally, are those who use Facebook and other social media with great frequency. In conjunction with trolls and bots, they become carriers for fake news and hatred: many believe that what they read or view nearly continuously must be true, and so they pass it along to others in their networks. This pattern holds in Pakistan on political, social, ethnic and other controversial issues. Bytes for All, a Pakistani non-governmental organization, has been monitoring online hate speech on social media since September 2019 and has recorded numerous such incidents.
Over the years, Facebook, Twitter and other big tech companies have taken some steps intended 'to tighten security and remove viral hate speech and misinformation.' These include expanding lists of derogatory terms in local languages, removing fake accounts and bots, and improving their 'deep learning' algorithms. Whatever technical adjustments Facebook and the other platforms make, however, no excuse or explanation can compensate for a human life lost to false or hateful content.

As I have suggested before, governments, especially the US government, need to be proactive in exercising more control over social media. Strong action by the US, home of Facebook and other social media and tech giants, would have a worldwide impact. In this region, the governments of Pakistan and India have been working on new laws for the social media giants; proposed requirements include maintaining an office presence in Pakistan and keeping an Indian resident on staff to collaborate with local authorities. Facebook also needs to listen to the 'recommendations' of its own staff to protect minorities and vulnerable groups from hate globally. The quality and accuracy of content, not the number of likes, shares or comments, should be among the key criteria for the algorithms that promote or recommend content, such as those behind search engines.

"If we promote an environment where everyone, regardless of religion, ethnicity or gender, can participate, we believe we can create harmony. This will put us on the path towards sustainable development," said Haroon Baloch, Senior Program Manager and digital rights researcher at the Pakistani organization Bytes for All. This is sound advice. It should be followed and built into the algorithms that curb hate speech on social media, helping make Pakistan and the world more collaborative and harmonious.

The writer is an entrepreneur, civic leader and thought leader based in Washington DC.