"Disinformation is not just any piece of “fake news” – It's the deliberate dissemination of false or misleading information"
Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and the approaches to countering it. In this installment we talk with Prof. Dr. Ulrike Klinger, Professor for Digital Democracy at the European New School for Digital Studies at European University Viadrina in Frankfurt (Oder) and Associated Researcher at the Weizenbaum Institute for the Networked Society in Berlin.
Tell us a bit about yourself. How did you become interested in disinformation?
Ulrike Klinger: I am a scholar of political communication, a field at the intersection of political science and communication science, and a professor of digital democracy. My research focuses on election campaigns and other forms of social mobilization on social media, and on the broader context of the public sphere and digital public spheres. What I understand as disinformation is strategic communication: it is not just any piece of presumably “fake news” that is circulating, but the purposeful, deliberate, strategic dissemination of false or misleading information. Since elections and various forms of social mobilization require strategic communication, this is the context in which I encounter disinformation.
How widespread is this problem? We have been talking about this for years. Has the severity of the problem decreased or increased?
Disinformation is nothing new, but it is certainly a bigger problem than it was in the past. This is not just because of the Internet, but because of three developments that emerged simultaneously. The first is the rise of social media platforms over the past decade, an environment in which anyone can publish anything. They are a place where democracy flourishes, but also where propaganda and disinformation are abundant. We have seen that platforms have become a main source of information for quite a large part of the population across the globe. The Internet is thus not some minor fringe space where only the younger generations hang out. The second development is that the rise of platforms challenges traditional media. Professional journalism has been struggling in recent years, especially since advertising money has been flowing to the platforms. A key question is how to generate the financial support necessary for professional, high-quality journalism to survive in the age of platforms. The third development is the general weakening of democratic institutions in Western societies, from political parties to the church and civic associations. The accumulated effects of these three developments make disinformation a bigger problem than it used to be.
What are some differences between what is happening in the US and Europe, and specifically in Germany? Can disinformation be addressed as a single phenomenon?
The same technology plays out differently across societies and populations. European media systems, which tend to have robust public service media and journalism, differ from their American counterparts. While there is great journalism in parts of the US, “news deserts” are growing across the country. In Germany, too, there are different levels of education and media literacy across segments of the population, but overall, news and high-quality journalism are readily available to the public, more so than in the US. However, disinformation is a huge challenge both in Europe and the US, and for all other democracies as well.
Many identify 2016 as a turning point in the conversation about disinformation. Now, some years later, have we achieved anything? Are we any closer to finding the right approach to dealing with this challenge?
The discourse has turned quite gloomy. I think that is because it was far too optimistic and naïve in the beginning. This is something that always happens when new technologies arrive: there is inflated techno-optimism on the one hand and exaggerated techno-pessimism on the other. Both positions, however, imply a form of techno-determinism, which I view as a crucial misconception. When we talk about the digital society, we need to talk more about society, and not just about the “digital”, i.e. about technology in isolation. For instance, think about the optimism ten years ago around the Arab Spring and the “Facebook revolutions”, and then think about 2016, when we realized that platforms are not only democracy-disseminating machines but can also be used to spread propaganda and lies. I think these events helped demonstrate the complex intersection of technology and societal dynamics. Then the pandemic came along and added yet another dimension to the equation.
How has the pandemic influenced the debate?
The pandemic has made it clearly visible that something very problematic is going on. It has shown in an acute way what disinformation can do: disinformation about a virus, about vaccines, and about potential cures can actually kill people. We have seen the scale of the problem, how a very small number of people can spread disinformation to a very large population. The pandemic also activated the platforms in a way that had not happened before, spurring them to increase their efforts at content moderation and at enforcing platform rules. I think this happened during the pandemic because it was easier to identify disinformation about COVID-19 than about an election. We have hard facts about the virus and about how the vaccines work, and it is fairly easy to fact-check them. It is much more difficult to fact-check misleading information in an election campaign, when someone uses an old photo, an old video or some snippet to create the impression that it is connected to things happening today.
Has the pandemic had an effect on the understanding of political disinformation and how we address and research it?
It’s too early to say. In my perception, two main misconceptions about disinformation have become clearer over the past two years. One is the idea that disinformation is the result of large bot networks or troll armies, from Russia or other countries, messing up our otherwise rational and coherent societal discourse. Rather, disinformation often flows top-down. If high-ranking politicians or celebrities disseminate disinformation, you do not need a Russian troll army to do the job. The other misconception is that there will be a technological solution to it. I have been very critical of this position. We have seen how content moderation by platforms and fact-checking have been doing a really great job in recent months, but they still could not solve the problem. By now, we should have understood that we will never be able to regulate disinformation with laws, content moderation by platforms, or fact-checking. We will not be able to regulate it away. I think the most important question to ask should not be “How can we get rid of disinformation?”, because we will not. Rather, the question should be: “How can democracy survive and be fit to deal with this?” Democracy is quite resilient; it has dealt with problems before. What are the crucial points we have to identify to get through this and live with disinformation?
These questions relate directly to your research and field of expertise. Can you share some insights?
I think it needs to be an effort that involves all of society. In recent years, we have done pretty much nothing to regulate platforms. It is not just a platform problem; we have allowed them to grow into huge billion-dollar monopolies. They should be able to continue to operate, naturally, but in a way that is less harmful to society and democracy. Regulators have delegated the problem to the platforms and put them in charge: “We want you to delete content that we, as a society, do not like.” Citizens tend to outsource things to politicians, demanding that they do something about it, but actually it is an issue that everyone in society has to get involved with: the platforms, politicians, civil society, each and every one of us. We all contribute to the dissemination of disinformation every day by clicking on and sharing posts that we have not read. For news and information, we should not rely on free content delivered via platform algorithms. So much low-quality content is out there for free, while high-quality content remains locked away behind paywalls. We need to find new business models for quality journalism, because it is more important than it has ever been. It may not be perfect, but it is at least an institution that provides epistemic editing in a world of information chaos. Journalists have professional, transparent methods for telling facts from fakes, finding sources, and fact-checking their stories before publication. Most importantly, they can be held accountable if they publish something that is obviously wrong. The more citizens use social media for information and debate, the more important it is to have professional, high-quality journalism. Everyone can support that by subscribing to a newspaper or paying some money for information. We have gotten very used to getting information for free, and research shows that this has negative consequences for citizens’ political literacy.
What needs to happen for politicians and for platforms to take significant steps towards addressing the problem of disinformation?
They have started to act, but political change requires political pressure. This is again where we, as citizens and civil society, come in. Politicians in many democracies are acutely aware of the problems that have arisen in digital public spheres. There is political will to address them, but it is incredibly difficult to do so, not least because of freedom of expression, a foundation of democracy itself. It is next to impossible to address disinformation through law and regulation alone, because in a democracy you do not want to censor platforms. We should not transform platforms into “truth institutions” or introduce “truth algorithms”. It is an issue that needs to be addressed on many political levels, for instance by reforming party regulations and election laws to sustain fair democratic elections in a disinformation environment. There are examples of civil society inviting parties to adopt a code of ethics during election campaigns, e.g. in the Netherlands and Germany in 2021. That is a low-threshold way to address the problem. I find it important to understand that disinformation is not a problem of technology, and therefore it cannot be solved by technology. Developing “truth algorithms” to filter presumed disinformation out of social media, or automatically flagging, shadow-banning and deleting content: this is not the direction we should be going in as a democratic society.
If technology is not the solution, and the legal approach is problematic as well, what other options are left in practical terms?
I think there is no “one” solution, but rather a whole bundle of different things that we can try. It’s complicated and it’s going to be messy. Unfortunately, we can’t outsource that. Fighting disinformation is an uphill battle that requires a whole-of-society approach, meaning that ongoing civic engagement is crucial. There is no “hands-off” solution to the problem, if that is what you are implying. Both technology and disinformation continue to evolve, and the deliberation among all relevant stakeholders needs to be seen as an ongoing process, not a one-time fix. I hope that, as a society, we have grasped the nature of the problem by now. I believe the romantic notion that portrayed the large internet platforms as “enablers of democracy” just a decade ago has largely been replaced by a more sober and balanced view of what technology can do to, and for, our societies. It is our generation’s task to step up and make this digital information environment beneficial, or at least less harmful, to society and to democracy.
Prof. Dr. Ulrike Klinger is Professor for Digital Democracy at the European New School for Digital Studies at European University Viadrina in Frankfurt (Oder) and Associated Researcher at the Weizenbaum Institute for the Networked Society in Berlin. Her research focuses on political communication, the transformation of digital public spheres, local communication and digital technologies such as algorithms or social bots.
From 2018 to 2020 she was Professor for Digital Communication at Freie Universität Berlin and head of the research group “News, Campaigns and the Rationality of Public Discourse” at the Weizenbaum Institute for the Networked Society in Berlin. Ulrike Klinger completed her doctorate in Political Science at Goethe University Frankfurt am Main in 2010. From 2009 to 2018 she was a postdoctoral researcher at the Institute for Communication Science and Media Research at the University of Zurich, a visiting researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin (2013) and at the Center for Information Technology and Society (CITS) at the University of California, Santa Barbara (2017), and a visiting professor for digital communication at Zeppelin University Friedrichshafen (WS 2016/2017).
More about Ulrike Klinger here: ulrikeklinger.de
The opinions expressed in this text are solely those of the author(s) and/or interviewees and do not necessarily reflect the views of IPPI and/or its partners.