Meta unveils detailed strategy for managing US elections 2024


Meta has detailed its strategy for political advertising in the run-up to the 2024 US presidential election. As in previous elections, the company intends to block new political advertisements during the final week of the campaign because it is too difficult to verify the veracity of claims in such a short period of time. Ads that are already running, however, will be exempt.

Earlier this month, the company announced that it would prohibit advertisers from using its own generative AI software—the tools that generate background images and make image adjustments—for political ads on Facebook and Instagram, and that additional restrictions would follow.

Furthermore, Meta has announced that it will require advertisers worldwide to disclose whether they have used AI or related digital editing techniques to “create or alter” a political or social-issue ad. “This applies if the ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” said Nick Clegg, Meta’s president of Global Affairs.

“It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Beyond the United States, other countries and regions—including India, Indonesia, Mexico, and the European Union—will also hold elections next year.

“While we are conscious that every election brings its own challenges and complexities, we’re confident our comprehensive approach puts us in a strong position to protect the integrity of next year’s elections on our platforms,” writes Clegg.

Meta tackles misinformation, hate groups, and election integrity with robust strategies

Meta claims to have taken down over 200 malicious influence campaigns and identified over 700 hate groups. The company labels state-controlled media outlets on Facebook, Instagram, and Threads, and blocks their ads from targeting people in the United States.

“We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” says Clegg.

“We’ve also built the largest independent fact-checking network of any platform, with nearly 100 partners around the world to review and rate viral misinformation in more than 60 languages.”

Meta, like many other platforms, has faced criticism in the past for its handling of election claims, particularly during and after the 2016 US presidential election. After initially banning Donald Trump in 2021, it reinstated his account at the start of this year, despite his previous claims that the 2020 election result was invalid.

Earlier this year, the independent Facebook Oversight Board announced that it was examining the moderation of a speech by a prominent Brazilian general, who called for people to “hit the streets” in the wake of President Lula da Silva’s inauguration, along with posts from Cambodia that it said potentially incited violence.
