Facebook will use the tactics its security teams developed against fake accounts to move against coordinated networks of real-user accounts that cause harm on its platform, Reuters reported on Friday. The approach, discussed publicly here for the first time, borrows techniques Facebook's security teams have previously reserved for wholesale takedowns of networks that used bogus identities to manipulate public debate, such as Russian troll farms.

The change could significantly affect political and other coordinated movements on the platform, and it comes as Facebook's content-policing practices face intense scrutiny from governments and NGOs around the world. Facebook has said it will now apply a similar network-level strategy to groups of coordinated genuine accounts that systematically break its rules, for example through mass reporting, in which users file false reports en masse to get a target's account taken down.

On Thursday, the company said it would use this strategy to tackle "organised social harm" on and off its platforms, applying it first to the German Querdenken movement, which opposes COVID-19 restrictions, over its alleged off-platform activities. The company's security teams will be able to identify the core networks fueling such activity and take more sweeping action than removing individual pieces of content or accounts. According to a leaked internal Facebook assessment published by BuzzFeed News in April, the company had "little policy around concerted authentic harm" at the time of the Jan. 6 riot at the U.S. Capitol.

Facebook's security specialists began cracking down on influence operations that used fake accounts in 2017, after the 2016 U.S. presidential election, in which U.S. intelligence officials concluded that Russia had used social media platforms to spread disinformation and fake news as part of a cyber-influence campaign. Facebook's security teams dubbed this activity "coordinated inauthentic behaviour", or CIB, and subsequently began issuing monthly security reports on their takedowns. The teams also tackle cyber-espionage networks and influence operations run by state media.

According to people with knowledge of the situation, team members had long debated how the company could intervene at the network level against mass movements of real-user accounts that systematically infringe its rules. Reuters has reported that the Vietnamese military's online information warfare branch (known as the Strategic Technical Directorate, or STD) has used real-name accounts and the bulk reporting of Facebook pages to undermine its opponents; some accounts were removed as a result of those coordinated reporting campaigns.

Facebook faces mounting pressure from both governments and its own employees to address the abuses found on its systems, even as others accuse the company of censorship, bias against conservatives, or uneven enforcement. Extending network-disruption models designed for inauthentic accounts to authentic ones raises questions about how the change will affect public discourse, online movements, and campaign tactics on both sides of the political spectrum. Evelyn Douek, a Harvard Law lecturer who researches platform governance, notes that harmful coordinated behaviour can look much like legitimate social movements.
"This definition of hurt will decide it … although, clearly, people's notions of harm might be hazy and fuzzy," she said. The shift has also prompted debate over how social media platforms define and tackle coordinated campaigns, following high-profile instances of coordinated activity around last year's U.S. election, from teens and K-pop fans who said they used TikTok to sabotage a rally for former President Donald Trump in Tulsa, Oklahoma, to political campaigns paying online meme-makers.