Many of us have come across dodgy fake news on Twitter, but according to a new study, offering corrections may only make the problem of misinformation worse. Researchers at the University of Exeter and MIT Sloan in Massachusetts performed an experiment on the site using specially-created accounts. In replies to 'flagrantly false' tweets other users had posted about politics, they offered 'polite corrections' with links to solid evidence. But they found this had negative consequences, leading to even less accurate news being retweeted and 'greater toxicity' from those being corrected.

Misinformation has been a constant issue for social media giants including Twitter and Facebook – particularly in the last year regarding coronavirus and vaccinations. Twitter has removed more than 8,400 tweets and challenged 11.5 million accounts worldwide due to Covid-19 misinformation, it revealed in March.

But according to the lead author of the new study, Dr Mohsen Mosleh at the University of Exeter Business School, the findings were 'not encouraging', as they suggest one of the tools for combating misinformation doesn't actually work. The researchers think people should be wary about 'going around correcting each other online'.

'After a user was corrected they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language,' said Dr Mosleh.

To conduct the experiment, the researchers identified 2,000 Twitter users, spanning a mix of political persuasions, who had tweeted any one of 11 frequently repeated false news articles. All of those articles had been debunked by Snopes, a website that describes itself as the internet's 'definitive fact-checking resource'. Examples include the incorrect assertion that Ukraine donated more money than any other nation to the Clinton Foundation, and the false claim that Donald Trump, as a landlord, once evicted a disabled combat veteran for owning a therapy dog.
The research team then created a series of Twitter bot accounts, each of which had existed for at least three months, gained at least 1,000 followers, and appeared to other Twitter users to be a genuine human account. Upon finding any of the 11 false claims being tweeted out, the bots would send a reply along the lines of, 'I'm uncertain about this article – it might not be true. I found a link on Snopes that says this headline is false.' The reply also linked to the correct information.