My opinion on social media is quite skewed. Whoever finds this article must have found it through some medium. How else do we live in the world of the internet, if not by using it to reduce the encumbrances of physical distance? The point of contention here is not whether the services of social media are free, nor the ethics around them, but the inevitability of social media platforms zeroing in on censoring views or labelling certain pieces of information as “misinformation”. Quite a bit has been said about fact-checkers, and there is possibly nothing more I can add to the angst already shared. Facebook, Twitter and other social media platforms have been labelled the arbiters of freedom of expression, and dystopian scenarios have been imagined.
I contend that this is inevitable based on how social media is constructed. Allowing only selective speech is part of a bigger question. Unfettered free speech would mean allowing slurs. Speech without action should be fine, if we allow unfettered freedom to say what’s on our minds. Take the case of someone who considers himself an übermensch: there is nothing to stop this person from coming out and openly saying why those around him are inferior to him, and he can choose the choicest of expletives to insult them. Pure anarchy and pure totalitarianism do no good here. If there is agreement that unfettered speech is bad for society as a whole, it becomes a moral question of how much regulation protects that society. Before we get to the morality of it, one also needs to articulate what the word “society” means in this context.
One need not delve into a sociological or anthropological line of thought to define “society” in this context. It is simply the userbase of the platform. Anyone remotely familiar with the censorship of social media platforms should also know that platforms originating in the USA are protected by Section 230 of the Communications Decency Act: a platform cannot be held legally liable for anything a user says. However, platforms also have to ensure their users have an experience that brings them back. This necessitates community guidelines. The question boils down to this: does a person have the right to lie on a platform? Do we have the right to lie even without social media? To look at it as a right is erroneous. To lie deliberately is a choice one makes; it is a matter of freedom, not a right. But transitive reasoning suggests that if we have the right to freedom, and that freedom includes the freedom to lie, we must have the right to lie.
This opens the Pandora’s box of what lying means. We confuse the terms truth and fact; I posit that they aren’t the same. Truth is a condition of a set of propositions. If all men are humans and Socrates is a man, then Socrates is a human; the conclusion follows as a condition of truth. If the propositions were altered to say “all men are batteries and Socrates is a man, therefore Socrates is a battery”, the statement is still true in form. That doesn’t make it a fact. The second example is obviously nonsensical; it is non-factual. How do we ascertain that? The word “battery” means something else, and that definition is agreed upon by consensus. We perceive reality based on such consensus opinions. Consensus is a function of society. There will be strata in society, and if stratification of society is incentivised, the mechanism of drawing boundaries is bound to become as simple as possible, by virtue of iterative evolution.
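To make the distinction concrete, here is a minimal sketch in Python (the function and examples are invented purely for illustration): the validity of a syllogism’s form is independent of whether its premises are factual.

```python
# A minimal, purely illustrative sketch: the same deductive form yields a
# "true" (valid) conclusion whether or not the premises are factual.

def syllogism(predicate: str, subject: str) -> str:
    """All men are <predicate>; <subject> is a man; therefore <subject> is a <predicate>."""
    return f"{subject} is a {predicate}"

# Validity is purely structural -- both conclusions follow from their premises:
print(syllogism("human", "Socrates"))    # sound: the premises are also facts
print(syllogism("battery", "Socrates"))  # valid in form, but not factual
```

The function cannot tell the two cases apart; only the consensus meaning of “battery” can.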
To a social media platform, a user is fundamentally a matrix of quantified data. How we use the platform is stored and interpreted by software, but at the end of the day it is still a computer that understands only stochastic processes over finite sets of outcomes. Machine learning and artificial intelligence allow these stochastic processes to expand the set of possible outcomes beyond the initial one, but they still have limitations: they look at a person’s action through a definitive cause-effect framework. Perhaps some attractive text or a great discount caused someone to make a purchase. Studies of human action go against this line of thought. Humans act within the set of options they perceive; the stimulus provided isn’t the exhaustive list of options the acting human sees, and the way opportunity costs are perceived will not be the same either. Reducing human action to primal cause-effect relations is not a bug of these systems. A good system evolves to simplify its effort while staying roughly accurate.
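As a deliberately crude sketch of that reduction (every feature and weight below is hypothetical, not drawn from any real platform), consider how a cause-effect model sees a purchase:

```python
# A hypothetical sketch of the reduction described above: the platform sees a
# user only as a vector of quantified features, and models an action as a
# direct function of a stimulus. All features and weights are invented.
import numpy as np

# Hypothetical per-user features: [sessions_per_day, ad_clicks, discount_shown]
user = np.array([4.0, 2.0, 1.0])

weights = np.array([0.1, 0.3, 1.2])  # illustrative learned weights
bias = -1.5

def purchase_probability(x: np.ndarray) -> float:
    """Logistic cause-effect model: stimulus in, probability of action out."""
    return float(1.0 / (1.0 + np.exp(-(weights @ x + bias))))

print(f"P(purchase) = {purchase_probability(user):.2f}")
# Whatever options the human actually weighed, the model only ever sees these
# three numbers; the rest of the deliberation is invisible to it.
```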
When a human user is reformulated digitally, the incentive is to get straight to maximising network effects. Since the algorithm learns from the way the user interacts with it, it gets better the more it is used. The goal is straightforward: give the user the best experience to keep them hooked on the network. Since monetisation happens through targeted ads, the added incentive is to ensure users spend as much of the day as possible on the network. Much like the simplicity craved by systems as they evolve, the human brain is no different: it has shortcuts to decisions, called heuristics. If a social media network can leverage the user’s heuristic pathways of thought, the user will think they made their decisions of their own free will. The only problem is that heuristics are largely driven by primal cognitive biases developed through millennia of evolution.
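A toy sketch of this feedback loop, assuming a hypothetical feed that learns which content category keeps a user engaged (the categories and response rates are invented):

```python
# An epsilon-greedy toy model of the engagement loop: serve content, observe
# the heuristic response, and reinforce whatever worked. Purely illustrative.
import random

categories = ["outrage", "cute_animals", "news", "hobby"]
estimated_engagement = {c: 0.0 for c in categories}
serve_counts = {c: 0 for c in categories}

def pick_content(epsilon: float = 0.1) -> str:
    """Mostly exploit what has worked so far; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(categories)
    return max(categories, key=lambda c: estimated_engagement[c])

def record_feedback(category: str, engaged: bool) -> None:
    """Update the running average engagement for this category."""
    serve_counts[category] += 1
    n = serve_counts[category]
    estimated_engagement[category] += (float(engaged) - estimated_engagement[category]) / n

# A hypothetical user whose heuristics respond most reliably to outrage bait:
response_rate = {"outrage": 0.6, "cute_animals": 0.4, "news": 0.2, "hobby": 0.3}
for _ in range(1000):
    c = pick_content()
    record_feedback(c, engaged=random.random() < response_rate[c])

print(max(categories, key=lambda c: estimated_engagement[c]))  # most likely "outrage"
```

Each session the loop tightens: content that triggers a heuristic response gets reinforced, and the user experiences the result as their own choice.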
Catering to biases means that people will largely fall into binary groups. Humans do better as a society than as individuals. Society is formed by relationships built between individuals on the basis of kinship, and these relationships give our existence meaning and purpose. It is no surprise that nearly every known religion convergently concurs that human existence is to serve those around us and to find meaning in life. If humans are segregated into binary groups, kinships form along those binary outcomes: there will be a group of people agreeing on something while another group disagrees with the same proposition. Unfettered social developments tend to favour the majority view until someone draws a line, and if a social media network is truly unfettered, it will favour the majority view. The important caveat is that it will be the majority among the users, not those outside. However, social media networks keep spilling over beyond their digital existence into the offline world. Cognitive biases frame the opinions of people using the network and subsequently affect the people in the users’ circles of influence. If a democracy needs a thinking populace to act, social media networks convert users into primal beings that respond to stimuli as if causality were univariate.
In 2020, any opinion that SARS-CoV-2 originated in a lab in Wuhan was put down. This was called editorialising and censorship, but upon deeper inspection one needs to understand something. The human brain can discern the difference between truth and fact; a system or a piece of software cannot, no matter how big that difference is. A network gets better and better at serving its users through network effects. When something significant happens in the world, every news article covers the event, and this coverage lends a sense of legitimacy as to what the facts are. Those facts may be true within the system they exist in. In the era of cognitive biases, when a statement is confirmed by multiple sources concurrently, the corroborated facts come to form the truth. Since it is a binary-outcome system, anything that disagrees becomes false, or in other words, misinformation.
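Stripped to its logic, that binary-outcome system might look like this hypothetical sketch, where a claim’s status depends only on how many sources corroborate it (the threshold and counts are invented):

```python
# A deliberately crude sketch: a claim's status is a pure function of how
# many sources corroborate it. Threshold and counts are hypothetical.

def classify(corroborations: int, total_sources: int, threshold: float = 0.5) -> str:
    """Binary outcome: there is no middle state for 'unsettled' or 'unknown'."""
    return "fact" if corroborations / total_sources >= threshold else "misinformation"

print(classify(corroborations=18, total_sources=20))  # "fact"
print(classify(corroborations=2, total_sources=20))   # "misinformation"
# A human can hold "unproven but possible"; this function cannot.
```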
If something gains the status of misinformation, a system designed to keep users hooked has a greater incentive to show the misinformation as false than to show the truth as fact. We as humans react very differently to the two cases if we act through our cognitive biases. If users see something as misinformation, censoring that opinion’s spread becomes the next goal of using the network. The society on the network deems something false, and since users have the option to report and flag information, no one really has to censor information from a central office unless it’s a totalitarian regime. When congressional hearings acknowledged that a leak from the Wuhan lab was plausible, the opinion became allowed on the network. It must be noted that this didn’t happen because the opinion is likely to be true; it happened because the other option became more likely to be false.
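A minimal sketch of that decentralised mechanism, with an invented threshold: once enough users report a post, the system suppresses it without any central censor stepping in.

```python
# A minimal sketch, with a hypothetical threshold: the crowd's reports do the
# censoring; no central office is involved.

def should_suppress(reports: int, views: int, threshold: float = 0.02) -> bool:
    """Suppress a post once the share of viewers reporting it crosses a threshold."""
    return views > 0 and reports / views >= threshold

print(should_suppress(reports=300, views=10_000))  # True  -- the userbase has censored it
print(should_suppress(reports=5, views=10_000))    # False
```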
It all comes down to asking whether social media networks are right to censor an opinion or to ban someone for their speech. It is never a good idea to legislate our way out of this: governments change, and the people in them change. We would be far better off treating social media networks as privately run public utilities rather than as tech platform companies. If companies were required to disclose to consumers who buys their behavioural data, we could still leverage the best of social media in nullifying the effects of physical distance without amplifying the destruction caused by letting humans devolve into acting only on their heuristics. As far as censoring is concerned, we all self-censor what we say to a great extent in the real world. But since social media networks thrive on heavy use, there is a greater incentive to show a user opinions they disagree with alongside those they agree with than either alone. The majority view, through corroborated facts, becomes the truth on that network. It doesn’t matter whether the managers of the network step in; second-order effects, i.e. outcomes of outcomes, do the work for them. Censorship is inherent in a society. In a digital society, it’s no different.
To indict social media networks and try them as arbiters of truth is a bad idea. To ban them entirely is even worse. Social media does as much destruction as it does good; the fundamental objective should be to minimise the destruction. I don’t know how, but that surely has to be the way forward.
An analytical essay on how posts on social media evolve and gain weight. Good read.
Hi, you said the human brain can distinguish fact and truth. That is true, but the brain is not wholly self-sufficient. It takes up something that exists and puts it forward, or mixes it with another thing and puts it forward. So when we think we are being wholly objective, we are still relying on our faulty imaginations to bring up something partially inspired by what already exists. What I am trying to put forward is that humans are not completely self-guided, morally and in all matters of truth. A German philosopher said (I am paraphrasing) that our lives are dependent on external things. So our pathways of thought can easily be shaped by competent advertising and marketing gimmicks, ergo evolutionary sensibilities.
Good essay, by the way. Looking forward to your future work.
Hi Natesan, these pathways of thought were originally assumed to be manipulable. Check out the work of Daniel Kahneman and Amos Tversky in behavioural psychology; it specifically covers heuristics, the shortcut thinking mechanisms we’ve evolved to use. For further reading on the topic, you may also try Human Action by Ludwig von Mises and The Psychology of Intelligence Analysis by Richards Heuer. On logic, you can try The Principles of Mathematics by Bertrand Russell and Process and Reality by A. N. Whitehead. These were my references.