It's a topic I've been thinking about for some time without ever giving it the attention it deserves. I was rereading news stories such as the fine imposed by the EU on “X” (aka Twitter) over the “blue tick” and the lack of transparency in how its meaning is managed. I read about how the same platform had rolled out a supposedly revolutionary feature revealing that political propaganda accounts had non-local origins. Or how social platforms steer public opinion through the number of likes and shares attached to a post.
Then, periodically, sensationalist articles appear describing the platforms as “plagued by bots” and fake profiles, playing the victim card while professing a commitment to building a better platform governed by community rules which, even today, remain anything but clear.
Shall we try to tell the truth? These statements are essentially false and technically indefensible. Taking a broader view, we could even call them an insult to their audience.
However, these statements are convenient and enticing, and they keep people hooked on social media, playing on the same dopamine loop that drives gambling, smoking, or drugs.
While the saying “where there's a law, there's a loophole” may hold in social terms, the Internet rests on technical rules that cannot be circumvented. It would be a bit like trying to defy the laws of physics by flying a plane without accounting for gravity, aerodynamics, and everything else involved.
Returning, then, to the victimhood “whining” of social media: it is possible to significantly reduce the number of “bots” that infest the foundations of these platforms like termites. Analysing the traffic and “behaviour” of accounts would be enough to curb the phenomenon. A few key signals could easily be used to restrict access from sources that cannot be human: traffic from data centres, search-engine crawlers, and click farms. A minimal sketch of this kind of filtering follows.
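To make the point concrete, here is a minimal sketch in Python of the two checks just described: a source check (is the traffic coming from hosting ranges rather than consumer connections?) and a crude behavioural rate check. The CIDR ranges, thresholds, and names are hypothetical placeholders; real systems rely on maintained ASN/prefix feeds and far richer behavioural models. This is an illustration of the principle, not a production implementation.

```python
import ipaddress
from collections import deque
from time import monotonic

# Hypothetical sample of datacenter CIDR ranges. A real filter would use a
# maintained ASN/prefix feed; these are documentation-only placeholder nets.
DATACENTER_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3, placeholder
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2, placeholder
]

def from_datacenter(ip: str) -> bool:
    """Flag traffic originating from hosting ranges rather than consumer ISPs."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)

class RateTracker:
    """Crude behavioural check: humans rarely sustain many actions per second."""

    def __init__(self, max_actions: int = 20, window_s: float = 10.0):
        self.max_actions = max_actions  # hypothetical threshold
        self.window_s = window_s        # sliding window in seconds
        self.events = deque()

    def record_and_check(self) -> bool:
        """Record one action; return True if the rate looks automated."""
        now = monotonic()
        self.events.append(now)
        # Drop events that fell out of the sliding window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_actions

if __name__ == "__main__":
    print(from_datacenter("203.0.113.7"))  # True: inside the placeholder range
    print(from_datacenter("192.0.2.1"))    # False: not in the sample list
```

Even heuristics this simple would not catch everything, but they show that the basic signals (origin network, action rate) are cheap to compute at scale.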
This is not done for one simple reason: convenience.
To be honest, futuristic technologies are not even necessary. The use of “artificial intelligence” to block “fake” profiles has, in fact, been shown to do more harm than good, locking out users, professionals, micro-influencers and influencers who, on the contrary, suffer real damage without having anyone to turn to.
It is convenient, however, for those who offer illicit services (not formally illegal, since the matter is unregulated) such as the aforementioned click farms, streaming farms, or any other service that sells paid “engagement”.
What would “X” be without the profiles that copy and paste other people's content? Strictly without attribution, because in the age of social networks we have decided that copyright can go to hell (to put it bluntly). What would Instagram be without comments consisting only of emojis or a random “Great post!”?
If, on the one hand, “lots of bots” means “lots of traffic”, inviting people to join an inflated community where they can have their say and seek an echo for their ideas, on the other hand the possibility of using the tool professionally collapses dramatically. Your visibility is buried under an unmanageable volume of content, algorithms that promote only what management wants, and AI that rampages and cannibalises everything cannibalisable.
Who benefits, on paper, is the platform itself, which claims it can sell even more advertising, since the algorithm can still serve content to real, flesh-and-blood users. On this point, however, one might wonder whether social media managers (or so-called managers, since they do not “manage” anything) have ever asked themselves a simple question: does what Meta's or Google's analytics platform tells me correspond to reality? We asked ourselves that question, and we could not find evidence for even half of what Meta claims.
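As a rough illustration of the cross-check we mean (all figures below are invented for the example), compare what the platform reports for a campaign with what your own first-party server logs actually record:

```python
# Hypothetical comparison: interactions reported by an ad platform vs.
# sessions actually observed in first-party server logs for the same campaign.
platform_reported = {"campaign_a": 12000, "campaign_b": 8500}
server_log_sessions = {"campaign_a": 5100, "campaign_b": 3900}

for campaign, reported in platform_reported.items():
    observed = server_log_sessions.get(campaign, 0)
    ratio = observed / reported if reported else 0.0
    print(f"{campaign}: reported={reported}, observed={observed}, "
          f"verified={ratio:.0%}")
```

When the verified share sits well below 100%, as in our experience it did, the gap is precisely the question nobody at the platforms seems eager to answer.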
I do not believe there is a definitive solution to the “bot” problem, especially if we accept the maxim that “the only truly obsessive-compulsive innovators are fraudsters”. Nevertheless, claiming the problem is so widespread as to be uncontrollable is false, and those who claim it do so knowingly, with the intention of steering other people's opinions. If you have millions or billions at your disposal in technology and resources, there are three possibilities: you are incapable of doing it, you have no idea how the thing you built works, or you do not want to do it (with an obvious preference for the third).
The right question to ask, then, is this: are companies driven by a voracious appetite for profit really interested in tackling a problem as old as the Internet itself?