With a worldwide election season widely predicted to be riddled with misinformation approaching, the major US-based tech platforms are rolling back policies designed to combat it, raising concerns. Whether it’s YouTube scrapping a key misinformation policy or Facebook loosening its fact-checking controls, the social media giants are stepping back from their role as the internet’s sheriffs. The changes have come amid layoffs, cost-cutting measures, and pressure from right-wing groups that accuse companies such as Facebook parent Meta and YouTube owner Google of stifling free speech.
As a result, the platforms have relaxed content moderation procedures, cut trust and safety teams, and, in the case of Elon Musk’s X (formerly Twitter), restored accounts known for spreading false conspiracy theories. Researchers say these moves have weakened the platforms’ ability to tackle what is expected to be a flood of misinformation during more than 50 major elections around the world next year, including in India, Africa, and the European Union. “Social media companies aren’t ready for the 2024 election tsunami,” the Global Coalition for Tech Justice said in a report earlier this month. “While they continue to count their profits, our democracies are left vulnerable to violent coup attempts, venomous hate speech, and election interference.”
In June, YouTube said it would stop removing content that falsely claims the 2020 US presidential election was plagued by “fraud, errors or glitches,” a move sharply criticized by misinformation researchers. YouTube justified its action, saying that removing this content could have the “unintended effect of curtailing political speech.”
In November, Twitter, now known as X, announced that it would no longer enforce its COVID misinformation policy. Since billionaire Musk’s tumultuous takeover of the platform last year, it has reinstated thousands of accounts previously suspended for violations such as spreading misinformation, and introduced a paid verification system that researchers say has amplified conspiracy theorists. Last month, the platform said it would allow paid political advertising from US candidates, reversing an earlier ban and raising fears about disinformation and hate speech in next year’s election.
“Musk’s control over Twitter has helped usher in a new era of recklessness by large tech platforms,” said Nora Benavidez of the independent organization Free Press. “We’re observing a significant rollback in concrete measures companies once had in place.” Platforms are also under pressure from conservative US advocates who accuse them of colluding with the government to censor or suppress right-leaning content under the guise of fact-checking. “These companies think that if they just keep appeasing Republicans, they’ll just stop causing them problems when all they’re doing is increasing their own vulnerability,” said Berin Szoka, president of TechFreedom, a think tank.
For years, Facebook’s algorithm demoted posts in the feed if they were flagged by one of the platform’s third-party fact-checking partners, such as AFP, reducing the visibility of false or misleading content. In a potentially significant shift, Facebook now gives US users the option to rank this content higher in their feeds, a change the network says will give people more control over its algorithm. Content moderation on social media has become a contentious subject in the country’s hyperpolarized political climate. Earlier this month, the US Supreme Court temporarily blocked an injunction that limited the Biden administration’s ability to contact social media companies and ask them to remove misinformation.
The injunction had been issued by a lower court of Republican-nominated judges, who ruled that US officials went too far in pressing platforms to suppress particular posts. Misinformation researchers at prominent institutions such as the Stanford Internet Observatory are also facing a Republican-led congressional investigation, as well as lawsuits from conservative groups accusing them of promoting censorship, a charge they deny. Downsizing across the tech sector has gutted trust and safety teams, and shrinking access to platform data has compounded researchers’ problems. “The public urgently needs to know how platforms are being used to manipulate the democratic process,” Ramya Krishnan of Columbia University’s Knight First Amendment Institute told AFP. “Independent research is crucial to exposing these efforts, but platforms continue to get in the way by making it more costly and risky to do this work.”