Platforms and the AI systems used to curate their information feeds have a significant impact on the public sphere. One major concern in this context is that exposure via platforms enables the amplification of extreme and radical voices, with far-reaching consequences when users and society at large are exposed to them. In particular, right-wing radical perspectives are often disproportionately visible compared to the actual size of their support base. This visibility on platforms also provides radical and fringe groups with a gateway to mass media coverage and with opportunities to generate support and funding that would not otherwise be available. To counter these developments, major tech companies have reacted in different ways to limit the spread of radical content on their platforms. These efforts to “push back encroaching extreme (right-wing) platforms to the fringes of the ecosystem by denying them access to the infrastructural services needed to function online” are called deplatformization. The concept describes the process of denying access to a platform to voices unacceptable to the major tech companies.
Techniques of deplatformization
In general, there are two different ways in which the major tech companies can deplatform radical voices: removal and deamplification.
Removal
The first technique, removal, describes the suspension of specific accounts or the deletion of specific pieces of content. A well-known example is the blocking of the social media accounts of then US President Donald Trump by multiple platforms of major tech companies at the beginning of 2021. While this was arguably the highest-profile account ever removed by a platform, it was far from the first. On all major platforms, thousands of accounts around the globe are deleted in regular purges because they have violated platform standards, such as the prohibition against posting sexually explicit content, bot activity, or spreading disinformation. For example, on January 12, 2021, just four days after Twitter banned Trump, the platform claimed to have suspended over 70,000 accounts linked to the conspiracy theory QAnon. The rules regarding content that is not allowed on a platform are typically laid out in the platform's terms of service (ToS). In the case of a first-time violation, the suspension is often only temporary, and its duration depends on the gravity of the violation. In the case of repeated violations or very severe breaches, however, the suspension is permanent.
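This graduated response can be illustrated with a small sketch. The thresholds, severity scale, and suspension durations below are purely hypothetical and chosen only to make the logic concrete; actual platform enforcement rules differ and are generally not public.

```python
from dataclasses import dataclass

@dataclass
class Violation:
    severity: int  # hypothetical scale: 1 (minor) to 5 (very severe)

def decide_sanction(prior_violations: int, violation: Violation) -> str:
    """Hypothetical graduated enforcement: temporary suspension for a first,
    less severe violation; permanent suspension for repeat or severe cases."""
    if violation.severity >= 5 or prior_violations >= 2:
        return "permanent suspension"
    # Duration scales with the gravity of the violation (illustrative values).
    days = 7 * violation.severity
    return f"temporary suspension for {days} days"

# Example: a second, moderately severe violation
print(decide_sanction(prior_violations=1, violation=Violation(severity=3)))
# -> "temporary suspension for 21 days"
```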
Users whose accounts or content were removed are informed about the decision. On several platforms, users can challenge these suspensions by submitting their case to a review system. Facebook, for example, established an “oversight board” that decides appeals. In its first report on its work, the Oversight Board stated that it had received 524,000 cases in about nine months. Among the rule violations cited for content removal, hate speech was the most prominent (36%), followed by bullying and harassment (31%) and violence and incitement (13%). Most of the appeals (46%) came from the US or Canada, while only 4% came from the Middle East and North Africa and 2% from Central and South Asia. While it is not known how much content was removed in these regions, it is quite likely that they are underrepresented in the appeals. This could be a consequence of lower monitoring rates in non-Western countries or the result of a lack of information about the appeal process in the global South.
In several countries, deplatforming accounts and content is legally required. In Germany, for instance, the Netzwerkdurchsetzungsgesetz requires platforms to take down “manifestly unlawful” content within 24 hours, and other illegal content within 7 days. This includes “incitement to hatred,” “dissemination of depictions of violence,” “forming terrorist organizations” and “the use of symbols of unconstitutional organizations.” Until 2016, the platforms mainly focused their deplatformization efforts on removing clearly illegal content, such as pirated or terrorist material. In the past few years, attention has expanded to additional types of content seen as toxic to the platform and to society at large. Major tech companies now also increasingly deplatform accounts or content to enforce their own content standards, in particular by removing hate speech and online bullying.
Deplatformization, whether it is demanded by governments or a consequence of self-regulation by the platforms, has significant consequences for the human right to freedom of speech and expression. The Norwegian newspaper Aftenposten made this point clearly when it reported that its post of the award-winning photograph of napalm victims in Vietnam had been blocked by Facebook. While the picture was later reinstated due to its high newsworthiness, the incident illustrates that the debate about what should and should not be allowed on platforms remains complicated.
Deamplification
In order to minimize violations of basic rights, platforms currently also use a second approach: de-amplifying voices and content that are considered borderline in terms of whether or not they breach platform standards. Instead of deleting accounts or posts, they limit the potential audience of content that is considered toxic but not illegal. For example, YouTube announced in 2019 that it intends to reduce recommendations of “borderline content or videos that could misinform users in harmful ways.” This measure only affects recommendations on which videos to watch next; the videos themselves remain on the platform and may still appear in recommendations for channel subscribers and in search results. Reducing borderline content can be complemented by a strategy of elevating “authoritative” voices such as news channels or public health information. In practice, this could mean that users who have been exposed to borderline content are primarily recommended content from “authoritative sources” as a follow-up. To determine what content qualifies as “borderline,” major tech companies rely on a combination of signals sent by users (flagging), prioritized reports by trusted flaggers such as fact-checkers, and AI models trained to identify borderline content, as sketched below.
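How such signals might be combined into a down-ranking decision can be sketched in a few lines. Everything below — the signal names, weights, and thresholds — is a hypothetical illustration of the general idea of de-amplifying borderline content, not a description of any platform's actual system.

```python
def borderline_score(user_flags: int, trusted_flagger_reports: int,
                     model_score: float) -> float:
    """Combine hypothetical signals into a single borderline score in [0, 1].
    Trusted-flagger reports are weighted more heavily than ordinary user flags;
    the AI model score is assumed to already lie between 0 and 1."""
    flag_signal = min(user_flags / 50, 1.0)              # saturate at 50 flags
    trusted_signal = min(trusted_flagger_reports / 3, 1.0)
    return 0.2 * flag_signal + 0.4 * trusted_signal + 0.4 * model_score

def adjusted_ranking_weight(base_weight: float, score: float,
                            threshold: float = 0.6) -> float:
    """Down-rank items whose borderline score exceeds a threshold: the item
    stays on the platform but is recommended far less often."""
    if score >= threshold:
        return base_weight * 0.1   # drastically reduce reach, but do not remove
    return base_weight

# Example: a video flagged by many users and by two trusted fact-checkers
score = borderline_score(user_flags=120, trusted_flagger_reports=2, model_score=0.7)
print(round(score, 2), adjusted_ranking_weight(base_weight=1.0, score=score))
```

The key design choice illustrated here is that de-amplification leaves the content online and only shrinks its potential audience, which is precisely why it is harder to detect and contest than outright removal.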
The advantage of this approach is that it addresses the distortion of the public sphere without directly limiting freedom of speech. The downside is that the process is far less transparent than removal, because platform users cannot know how large their potential audience was. In essence, this can mean that a user posts content that is seen by no one and never learns that there was no potential audience. It remains to be discussed whether reducing a potential audience to zero might itself be considered a limitation of free speech. At the moment, it is also unclear whether users could make use of the legal system or of the appeal mechanisms installed by the platforms themselves to protest against this measure. Given the current lack of transparency around potential audiences on platforms owned by major tech companies, it would also be nearly impossible to provide evidence of deamplification.
The consequences of deplatformization
Users whose accounts were removed or deamplified do not only suffer from restricted freedom of speech. If they own accounts with large audiences, they also lose an important stream of revenue. For example, Alex Jones, a well-known right-wing content creator who promoted several conspiracy theories through his channel Infowars, “attracted 2 million weekly listeners to his syndicated and streamed radio show, and his website, infowars.com, had 20 million monthly visits” in 2017-2018. This audience was largely built through his popularity on platforms owned by major tech companies, most notably YouTube. After YouTube deplatformed Jones in 2018, his website lost half of its visitors within three weeks. This example shows how important it is for content creators who rely on platforms for revenue, as well as for businesses that use platforms to promote their products and services to customers, to stay within the boundaries prescribed by the platforms' terms of service.
Deplatformization also has major implications beyond the consequences for the accounts that are removed or deamplified. It also affects the larger public debate that takes place on the platforms owned by major tech companies. The first point of discussion is whether deplatformization is really effective in its goal of limiting the reach of radical or toxic voices and content. The answer to this question is complex. On the one hand, it is clear that limiting the reach of over-amplified extreme content directly decreases distorted representations of public opinion on social media. Limiting such content makes it easier for social media users to form an unbiased opinion and to get a sense of what the majority of the population thinks. It also limits the accessibility of radical voices online, at least on the platforms owned by major tech companies. Having said that, preliminary research into the consequences of the purge of Trump's accounts at the beginning of 2021 showed that he continued to command a very large audience on alternative social media platforms. Thus, one could argue that the issue simply migrated to a different arena. Similar dynamics were observable when far-right networks such as 4chan were deplatformed by major tech companies: as they were banned from, for example, Facebook and Twitter, their networks on other services, such as Telegram, grew explosively.
Second, we need to consider the impact of deplatformization on freedom of speech. It is important to keep in mind how crucial freedom of expression is in democratic societies. It provides all citizens with the opportunity to inform themselves about politics and learn about political alternatives; it also provides them with a platform to make their own voices heard and to become part of the political debate themselves. Most importantly, it is a crucial element of the checks and balances that hold those in power accountable. Freedom of speech allows a public discussion of the abuse of power, without fear of censorship. Under the protections of this basic human right, speech, including speech on social media, can only be restricted “pursuant to an order by an independent and impartial judicial authority, and in accordance with due process and standards of legality, necessity and legitimacy.” It is questionable whether this standard is met in the case of deplatformization, because platforms make the decision to either remove or de-rank content unilaterally and without the knowledge of the user. In effect, the platforms become the arbiters of truth and harm. It can be argued that free speech obligations do not apply to social media platforms: they are private companies that do not have the human rights obligations governments have towards their citizens. Yet, as the platforms owned by major tech companies have become instrumental in providing a stage for the public debate, many – including several of the major tech companies that own the platforms – argue that platforms share the responsibility to uphold human rights. But this introduces yet another danger, since leaving the decision of what can and cannot be said to the major tech companies essentially means a privatization of human rights enforcement. Going forward, one of the big debates around deplatformization will be how governments around the world interpret their obligation to ensure free speech and public information vis-à-vis the platforms. It may mean a need for more regulation of the platforms themselves, as broadcasters and newspapers have been regulated in the past.
Third, deplatformization demonstrates a shift in power towards the major tech companies. Having the power to decide which voices are too radical also means holding the power to decide what the acceptable mainstream is. In particular in smaller countries in the global South, where the platforms owned by major tech companies are the main gateway to information, this has major implications for the possibility of challenging the status quo. In addition, tech companies typically allocate resources for locally monitoring content and actors from a cost-benefit perspective. This leads to different levels of actual enforcement of deplatformization globally, which can have severe consequences, as the human rights violations in the context of the genocide in Myanmar demonstrated. Finally, deplatformization demonstrates how the power that platforms wield goes beyond denying certain accounts or content an audience. It also excludes flagged organizations from the platform infrastructures necessary to generate funding, organize online and offline, and communicate. Van Dijck, de Winkel and Schäfer go so far as to describe this process as “implied governance” of the wider public sphere. By setting and enforcing the rules of what can or cannot be said, and by holding the power to cut off those who do not comply from an infrastructure they need to exist, platforms essentially control the wider information ecosystem. The authors conclude that “the responsibility over the hygiene of our common online public space – an infrastructure that is used by billions of people across the globe – is daunting, and therefore, it cannot be left to a handful of corporations or a handful of nations.” One of the major challenges going forward will be to develop an alternative approach that balances the need for online communication that is not toxic for society with the necessity to guarantee free speech.
The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of IPPI and/or its partners.