New forms of conflict, below the threshold of physical violence, are spreading on digital content-sharing platforms such as social networks, video-sharing sites, and messaging services. At its peak, there were around 200 million monthly engagements with misinformation stories on Facebook [1]. Many believe this “fake news” has influenced election results and, for example, distorted public understanding of Russia’s invasion of Ukraine.
Digital platforms hosting this content are usually created and administered by private organizations, such as Facebook and Instagram, which shape online environments while working under a range of economic incentives and constraints that are rarely explored in security studies. Among the common design choices made by developers of digital platforms is the leveraging of so-called network effects, whereby the value of being a user grows with the number of users on the network. Ideally, a platform presents a large amount of content that users want to consume while stimulating them to create new content that others will want to consume.
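To make the scale logic of network effects concrete, here is a minimal sketch (an illustration of the general idea, with invented parameters, not figures from any platform) comparing a linear value-per-user estimate with a Metcalfe-style estimate in which value grows with the number of possible connections between users.

```python
# Illustrative sketch (invented parameters, not from the article): two rough
# heuristics for how a platform's value might scale with its user count.

def linear_value(n_users: int, value_per_user: float = 1.0) -> float:
    """Value grows in proportion to the number of users."""
    return value_per_user * n_users

def metcalfe_value(n_users: int, value_per_link: float = 0.001) -> float:
    """Metcalfe-style heuristic: value grows with the number of possible
    user-to-user connections, n * (n - 1) / 2."""
    return value_per_link * n_users * (n_users - 1) / 2

for n in (10_000, 20_000, 40_000):
    print(f"{n:>6} users: linear ~{linear_value(n):>10,.0f}, "
          f"network-effect ~{metcalfe_value(n):>12,.0f}")
# Each doubling of the user base doubles the linear estimate but roughly
# quadruples the network-effect estimate -- one reason growth comes first.
```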
Unfortunately, network effects make these platforms equally attractive to canny and adaptive information manipulators. Because of this, content-sharing platforms need to perform effective content moderation – the process by which they review and remove content that violates their terms of service. Many moderation activities are labour-intensive, and therefore not easily scalable, while those that are scalable rely on imprecise algorithms.
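As a concrete, hypothetical illustration of why scalable moderation remains imprecise, the sketch below treats automated moderation as threshold-based classification over model scores; the scores and thresholds are invented, and no real classifier is implied.

```python
# Hypothetical sketch of scalable but imprecise automated moderation: a model
# assigns each post a violation score in [0, 1], and the platform removes
# everything at or above a chosen threshold. All values below are invented.

posts = [
    # (violation score from some model, does the post actually violate the terms of service?)
    (0.95, True), (0.80, True), (0.62, False),
    (0.55, True), (0.30, False), (0.10, False),
]

def moderate(posts, threshold: float):
    removed = [(score, bad) for score, bad in posts if score >= threshold]
    false_positives = sum(1 for _, bad in removed if not bad)                      # legitimate posts taken down
    false_negatives = sum(1 for score, bad in posts if bad and score < threshold)  # violations left up
    return false_positives, false_negatives

for t in (0.5, 0.7, 0.9):
    fp, fn = moderate(posts, t)
    print(f"threshold={t}: legitimate posts removed={fp}, violations missed={fn}")
# A lower threshold (more aggressive removal) misses fewer violations but takes
# down more legitimate speech; a higher threshold does the opposite. No setting
# eliminates both kinds of error, which is the sense in which the algorithms
# are imprecise.
```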
Business model as an open door
The typical business model for digital platforms involves first growing the user base to maximize network effects. A platform needs to find the right mix of users – both content creators and advertisers. Different types of users pay different prices: typical users may use the platform for free, those who produce especially attractive content may even be subsidized, while the users who get the most benefit from the platform – that is, advertisers – are required to pay. In addition to managing network effects and pricing, digital platforms exploit economies of scale through the deployment of labour-saving software.
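As a stylized illustration of this two-sided pricing (all figures below are hypothetical), consider how revenue can flow entirely from advertisers while ordinary users ride free and top creators are subsidized.

```python
# Hypothetical sketch of two-sided platform pricing (all figures invented for
# illustration): ordinary users pay nothing, top creators are subsidized, and
# advertisers -- who capture the most direct commercial value -- pay the bills.

user_groups = {
    # group: (number of accounts, monthly price charged; negative = subsidy paid out)
    "ordinary users": (1_000_000,   0.00),
    "top creators":   (    5_000, -50.00),   # e.g. revenue sharing or creator funds
    "advertisers":    (   20_000, 400.00),
}

revenue = sum(n * price for n, price in user_groups.values())
print(f"net monthly revenue: ${revenue:,.0f}")
# Ordinary users and creators generate no direct revenue, but they supply the
# audience and the content that make advertiser access worth paying for --
# which is why growing the free user base comes first in the business model.
```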
Looking more closely, we see some common design choices that can work to turn these platforms into arenas of conflict. For example, they often provide highly efficient creation tools and search algorithms. These tools amplify network effects, increasing the chances that certain content will 'go viral', i.e., achieve a high level of awareness due to shares and exposure. This effect is easily exploited by activists advancing fringe agendas, who in turn elicit more hostile reactions from users who disagree.
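One way to see how efficient distribution tools tip content into virality is a toy branching model; the parameters below are invented for illustration and do not describe any real platform's recommendation system.

```python
# Toy branching model of virality (illustrative parameters, not platform data):
# each exposed user reshares with probability p to k followers, so the expected
# number of new exposures per exposure is R = p * k. Reach explodes when R > 1.

def expected_reach(p: float, k: int, generations: int = 10, seed_audience: int = 100) -> float:
    """Expected cumulative exposures after a fixed number of reshare generations."""
    r = p * k                      # expected reshares ("reproduction number") per exposure
    exposures, current = seed_audience, seed_audience
    for _ in range(generations):
        current *= r               # each generation multiplies the previous one by R
        exposures += current
    return exposures

print(expected_reach(p=0.005, k=150))   # R = 0.75 -> the cascade fizzles out
print(expected_reach(p=0.010, k=150))   # R = 1.50 -> the cascade keeps growing
# Recommendation and search features effectively raise k (how many people each
# share is put in front of), pushing borderline content past the R > 1 tipping point.
```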
All this puts content-sharing platforms in a difficult position: insufficient or ineffective moderation can negatively affect users and the wider public, while choices about what to allow, remove, down-rank, or ban often involve value judgements, leaving moderators open to accusations of bias. A vicious circle ensues: a lack of moderation leads to user polarization, user polarization leads to criticism of moderation by one faction or another, criticism of moderation leads to a reluctance to undertake moderation, and so on.
Legitimacy and fairness
The sheer volume of content shared on digital platforms makes it difficult for operators to effectively moderate that content, while nefarious actors can quickly adapt their strategies, using tactics such as coordinated inauthentic activity.
Examples of groups that pose threats through the dissemination of manipulated information include home-grown political fringe groups in democratic states, such as climate change sceptics, and the state apparatus itself in authoritarian states. Across borders, the intelligence and propaganda arms of various states may undertake such activities against other states.
Recently cited cases of online information manipulation include Russian attempts to destabilize the 2016 US elections and the mixing of leaked material with fake news in the 2017 'Macron leaks' during the French presidential elections. In the context of the Russian-Ukrainian war, disinformation about alleged biological warfare labs in Ukraine has been disseminated on social media by covert networks presumed to be Russian and Chinese. Anti-vaccination propaganda has also been spread since the beginning of the Covid-19 pandemic.
Is content moderation sustainable for social media’s business?
Content moderation must fit into a platform's business model. Today, very large portions of the largest platforms’ workforces are engaged in content moderation, and it is not clear that this can be sustained, given existing revenues.
The long-lasting financial struggles at Twitter can be understood as the difficulty a relatively small platform faces in remaining profitable while providing adequate moderation. The new management under Elon Musk seems to be gambling on drastically reducing all costs, including moderation costs, while hoping that revenues will not drop too much as brand advertisers, who do not want to be associated with an unmoderated platform, take their leave. Whether this is possible remains to be seen.
When a platform does decide to go forward with moderation activities, developing the actual procedure can still be quite daunting. First, consensus must be built on how to treat political or ideological content, and there must be clear and established legitimacy behind any guidelines and criteria. In that respect, regulation, when instituted through a democratic process, can be very helpful. With the European Union's Digital Services Act (DSA) coming into force, regulators in the EU will be empowered to mandate a minimum level of moderation.
The EU Digital Services Act
Ethics is subjective and relative. In today's open and free democracies, where one person's truth may be another person's fake news, the key issue is how to ensure that content moderators do not simply become censors of unpopular views. Any moderation framework needs to be supported by a democratic process to be legitimate. The DSA is very much about the means of moderation that platforms need to provide or invest in – the 'how to moderate' rather than the 'what to moderate' – presuming the former will be easier to agree on than the latter.
The DSA, it is hoped, can ensure a minimum level of moderation across all platforms, but it may also serve to entrench the very largest platforms, which are the only ones likely to remain profitable while performing the requisite moderation. Thus, in the future, we may see more concentration at the top; new platforms may need to remain small, following models in which moderation is either not necessary or impossible, to avoid the associated costs.
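A back-of-envelope sketch, using entirely hypothetical figures, shows one mechanism behind this concern: if part of the cost of adequate moderation is fixed (policy teams, tooling, compliance reporting), it consumes a far larger share of a small platform's revenue than of a large one's.

```python
# Back-of-envelope sketch (all numbers hypothetical) of why mandated moderation
# can favour the largest platforms: part of the cost is fixed (policy teams,
# tooling, compliance), so it weighs far more heavily on a small revenue base.

def moderation_cost_share(monthly_users: int,
                          revenue_per_user: float = 2.0,       # ad revenue, $/user/month
                          fixed_cost: float = 5_000_000.0,     # tooling, policy, compliance
                          variable_cost_per_user: float = 0.4  # review effort per user's content
                          ) -> float:
    """Moderation spend as a fraction of revenue for a platform of a given size."""
    revenue = monthly_users * revenue_per_user
    cost = fixed_cost + monthly_users * variable_cost_per_user
    return cost / revenue

for users in (5_000_000, 50_000_000, 500_000_000):
    print(f"{users:>11,} users -> moderation eats {moderation_cost_share(users):.0%} of revenue")
# With these (invented) figures, the fixed component is crushing for the small
# platform and almost negligible for the largest one -- one mechanism by which
# a uniform moderation requirement could entrench incumbents.
```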
[1] Fake News Statistics – How Big is the Problem? | Business Tips | JournoLink.