In a blog post, Twitter’s Head of Safety & Integrity, Yoel Roth, said the company is launching a new “crisis misinformation” policy to address “situations of armed conflict, public health emergencies, and large-scale natural disasters”. The new policy comes as Tesla CEO Elon Musk is in the process of acquiring Twitter. Musk has expressed his opinions on content moderation in several tweets and posts, and has also stated that the deal will not be finalized until the platform certifies its number of bots or fake accounts; Twitter claims they make up fewer than 5% of users, a figure Musk does not believe.
“Crisis” is defined by Twitter as “situations in which there is a widespread threat to life, physical safety, health, or basic subsistence”. It goes on to say that it will rely on “verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.”
Warnings
According to the blog, the crisis misinformation policy will be global and will “help to ensure viral misinformation isn’t amplified or recommended” by the platform during emergencies. As soon as Twitter has evidence “that a claim may be misleading, we won’t amplify or recommend” that content across the platform.
That applies to the Home timeline, Search, and Explore sections of the app and website. Twitter will “prioritize adding warning notices to highly visible tweets and tweets from high profile accounts, such as state-affiliated media accounts, verified and official government accounts,” that include misleading information.
Tweets that break the crisis misinformation policy will be accompanied by a warning that reads:
“This Tweet violated the Twitter Rules on sharing false or misleading info that might bring harm to crisis-affected populations. However, to preserve this content for accountability purposes, Twitter has determined this Tweet should remain available.”
To be clear, Twitter will not remove the potentially inaccurate content; rather, it will limit its reach.
Examples
Examples of content that will carry the misinformation warning, according to the blog post, include:
- False coverage or event reporting, or information that mischaracterizes conditions on the ground as a conflict evolves;
- False allegations regarding the use of force, incursions on territorial sovereignty, or around the use of weapons;
- Demonstrably false or misleading allegations of war crimes or mass atrocities against specific populations;
- False information regarding the international community’s response, sanctions, defensive actions, or humanitarian operations.

Strong commentary, efforts to debunk or fact-check, and personal anecdotes or first-person accounts do not fall within the scope of the policy.
So, what happens when a piece of misinformation is flagged by Twitter? After clicking through the warning notice, users will still be able to view the content; it will not, however, be amplified or recommended across the service. Twitter will also remove the ability to like, retweet, or share that specific Tweet.
“We’ve found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events,” adds the blog post.
The first iteration of this policy is focusing on international armed conflict, starting with the war in Ukraine. Twitter plans to “update and expand the policy to include additional forms of crisis”.
“The policy will supplement our existing work deployed during other global crises, such as in Afghanistan, Ethiopia, and India,” the company said.