Twitter reports that fewer than 5% of its accounts are spam or fake, commonly referred to as "bots." However, since his offer to buy Twitter was accepted, Elon Musk has repeatedly questioned this estimate, even publicly dismissing CEO Parag Agrawal's explanation.
Musk later put the record $44 billion deal on hold and asked for more evidence about the actual situation. But why do people argue about Twitter's bot percentage in the first place?
The developers of Botometer, a popular bot-detection tool, have spent more than a decade at the Indiana University Observatory studying social media trends and the role of fake accounts in manipulation.
The group introduced the idea of a "social bot," and in 2017 they were the first to estimate bot prevalence on Twitter. In their experience, counting the bots on Twitter is genuinely difficult, and arguing over an exact number is likely a fruitless exercise.
What are bots?
Bots aren't fake accounts impersonating people, nor are they spammers mass-producing unsolicited promotional content. Bots are software-controlled accounts that may post simple content or perform basic interactions, such as automatic retweets.
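To make "software-controlled account" concrete, here is a minimal sketch of the kind of rule a simple automated account might follow. The tweet data, keyword list, and `should_retweet` helper are all hypothetical illustrations; a real bot would talk to the platform's API rather than a local list.

```python
# Toy rule-based bot logic: scan incoming tweets and flag any that mention
# a tracked keyword for automatic retweeting. All names and data here are
# invented for illustration only.

TRACKED_KEYWORDS = {"earthquake", "flood"}  # e.g. an emergency-news bot

def should_retweet(text: str) -> bool:
    """Return True if the tweet mentions any tracked keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & TRACKED_KEYWORDS)

incoming = [
    "Earthquake reported near the coast",
    "Check out my new song!",
    "Flood warnings issued for the valley",
]

to_retweet = [t for t in incoming if should_retweet(t)]
print(to_retweet)  # the two emergency-related tweets
```

The same trivial loop, pointed at promotional content instead of emergency news, is what makes the harmless/harmful distinction a matter of intent rather than technology.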
You will encounter various types of bots on the platform. These include fake and spam accounts that harm the online environment and violate the platform's policies.
Malicious bots spread misinformation, influence elections, disrupt communication, create conflicts, and manipulate opinions. However, there is also a harmless, even useful, category of bots, such as those that broadcast trending or emergency news.
So banning all bots might not be the best choice for users. For simplicity, researchers, and even Twitter, use the term "inauthentic accounts," although we don't yet know Musk's stance on it.
Why are Twitter bots hard to count?
Counting bots is technically difficult. The main problem is that external researchers lack access to Twitter's internal data, such as phone numbers and IP addresses, which hinders their ability to identify inauthentic accounts.
And because these accounts constantly develop new techniques to evade detection, even Twitter itself may struggle to produce an accurate bot count.
Another challenge comes from coordinated accounts: each appears to be a normal individual, but they behave so similarly to one another that they are likely controlled by a single entity.
These accounts can also evade detection through tricks such as auto-posting and then deleting large amounts of content, or swapping handles.
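One intuition behind spotting such coordination can be sketched in a few lines: independently run accounts rarely post near-identical content, so heavy overlap between two accounts' posts is a coordination signal. This is a toy illustration under that single assumption, not Botometer's actual method, and the account data below is made up.

```python
from itertools import combinations

# Toy coordination check: flag account pairs whose posted texts overlap
# heavily (Jaccard similarity of their tweet sets). Real detectors also
# use timing, shared links, and network structure; this is illustrative.

accounts = {
    "user_a": {"buy coin X now", "coin X to the moon", "weather is nice"},
    "user_b": {"buy coin X now", "coin X to the moon", "great game tonight"},
    "user_c": {"photos from my trip", "trying a new recipe"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

SUSPICION_THRESHOLD = 0.4  # arbitrary cutoff for this toy example

flagged = [
    (u, v)
    for u, v in combinations(accounts, 2)
    if jaccard(accounts[u], accounts[v]) >= SUSPICION_THRESHOLD
]
print(flagged)  # user_a and user_b share most of their content
```

The evasion tactics above target exactly this kind of signal: deleting posts shrinks the overlapping sets, and swapping handles breaks the account-level bookkeeping.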
Overlooking the Main Issue:
One concern is that arguing about the bot count misses the larger point: measuring the damage done by online exploitation and manipulation through these fake accounts.
BotAmp, the latest tool from the Botometer creators, found that most Twitter users are exposed to some degree of automated activity. Whether the overall prevalence is 5% or 20% makes little difference to individual users, since their experience depends on the people they follow and the topics they pay attention to.
Evidence suggests that inauthentic accounts are not the only culprits in spreading misinformation, polarization, radicalization, and hate speech.
So even if it were possible to precisely estimate the prevalence of these inauthentic accounts, that would only address the tip of the iceberg. Acknowledging the complicated nature of the issue is the first step toward helping policymakers and social media platforms craft meaningful responses.