Over the last 11 months, someone built thousands, if not hundreds of thousands, of fake, automated Twitter accounts to promote Donald Trump. The bogus accounts mocked Trump’s detractors from both parties and attacked Nikki Haley, the former South Carolina governor and United Nations ambassador who is challenging her former boss for the Republican presidential nomination in 2024. When it came to Ron DeSantis, the bots aggressively claimed that while he couldn’t beat Trump, he would make an excellent running mate.
As Republican voters weigh their options for 2024, whoever created the bot network is attempting to put a thumb on the scale, using online manipulation techniques pioneered by the Kremlin to sway the conversation about candidates on the platform while maximizing the accounts’ reach through Twitter’s algorithms.
Researchers at Cyabra, an Israeli tech startup that shared its findings with The Associated Press, discovered the massive bot network. While the identity of the people behind the phony account network is unclear, Cyabra’s investigators assessed that it was most likely developed in the United States.
To identify a bot, researchers look for patterns in an account’s profile, follower list, and content. Human users often post about a range of topics, mixing original and republished content, whereas bots tend to post repetitive content about the same few subjects.
That was true for several of the bots identified by Cyabra.
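The repetition pattern described above can be sketched as a toy heuristic in Python. This is purely illustrative: the scoring rule and sample posts are invented for this sketch, and Cyabra’s actual methodology is not public.

```python
from collections import Counter

def repetition_score(posts):
    """Fraction of an account's posts that duplicate an earlier post.

    Bots often recycle identical text, while humans mix original and
    reposted content. (Invented heuristic for illustration only.)
    """
    counts = Counter(posts)
    duplicates = sum(n - 1 for n in counts.values())
    return duplicates / len(posts) if posts else 0.0

# An account repeating the same slogans scores high...
bot_posts = ["DeSantis would make a great VP!"] * 8 + ["He can't beat Trump."] * 2
# ...while a varied, mostly original feed scores low.
human_posts = ["Made pancakes this morning", "Great game last night",
               "DeSantis would make a great VP!", "Started a new job today"]
```

In this synthetic example, the repetitive account scores 0.8 while the varied one scores 0.0; a real classifier would combine many such signals rather than rely on one.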
Bots, as they are popularly known, are phony, automated accounts that became notoriously well-known after Russia used them to meddle in the 2016 election. While huge tech companies have improved their identification of bogus identities, the network revealed by Cyabra demonstrates they remain a powerful factor in affecting online political debate.
The new pro-Trump network consists of three distinct networks of Twitter accounts, all of which were formed in large batches in April, October, and November of this year. Researchers suspect that hundreds of thousands of accounts may be involved.
All of the accounts have a name and a personal image of the supposed account holder. Some of the accounts produced their own content, often in reply to genuine users, while others republished real users’ posts, amplifying them.
One way to gauge the influence of bots is to measure the percentage of posts on a given topic that come from accounts that appear to be fake. For most online debates, that figure sits in the low single digits. Twitter itself has stated that fewer than 5% of its daily active users are fake or spam accounts.
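That percentage metric is simple to compute once accounts have been classified. A minimal sketch, assuming each post’s author has already been flagged as fake or genuine; the sample numbers below are synthetic, not Cyabra’s data:

```python
def inauthentic_share(author_is_fake_flags):
    """Percent of posts in a sample whose author is classified as fake.

    Each element is True if the post's author was flagged as inauthentic.
    (Simplified illustration; a real pipeline would classify accounts
    first, and these sample proportions are invented.)
    """
    if not author_is_fake_flags:
        return 0.0
    return 100.0 * sum(author_is_fake_flags) / len(author_is_fake_flags)

# A typical topic might sit in the low single digits...
baseline = [True] * 3 + [False] * 97
# ...while a coordinated campaign pushes the share far higher.
targeted = [True] * 76 + [False] * 24
```

On these made-up samples, the baseline topic comes out at 3% inauthentic and the targeted one at 76%, the kind of gap that flags coordinated activity.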
But when Cyabra researchers examined negative posts about specific Trump detractors, they found far higher levels of inauthenticity. For example, more than three-quarters of the hostile posts about Haley were traced back to fake accounts.
The network also helped popularize a push for DeSantis to join Trump as his vice presidential running mate, an outcome that would benefit Trump by averting a potentially bruising primary fight if DeSantis entered the race.
Researchers discovered that the same network of accounts disseminated highly positive content about Trump, contributing to an overall false picture of his support online.
Gross, one of Cyabra’s researchers, found the three networks after analyzing tweets about various national political figures and noticing that many of the accounts producing the content were created on the same day. Most of the accounts remain active, despite having small followings.
A message left with a Trump campaign spokesman was not immediately returned.
According to Samuel Woolley, a professor and misinformation researcher at the University of Texas whose most recent book focuses on automated propaganda, most bots aren’t designed to persuade individuals, but rather to magnify specific content so that more people see it.
Bots can also persuade people that a politician or issue is more or less popular than it really is, he said. A surge of pro-Trump bots, for example, can lead people to overestimate his overall popularity.
Until recently, most bots could be spotted by their bad writing or by account names featuring nonsensical words or long strings of random numbers. As social media platforms got better at detecting such accounts, the bots grew more sophisticated.
One example is cyborg accounts, which are bots that are periodically taken over by a real user who can publish unique content and reply to users in human-like ways, making them far more difficult to detect.
Bots may become considerably more cunning as artificial intelligence progresses. Emerging AI systems can generate more realistic profile photographs and posts that sound more authentic. According to Katie Harbath, a fellow at the Bipartisan Policy Center and a former Facebook public policy director, bots that speak like real people and use deep fake video technology may pose new challenges to platforms and consumers alike.
Bots are likely to have a long future in American politics, both as digital foot soldiers in online campaigns and as possible difficulties for voters and candidates attempting to defend themselves against anonymous online attacks.