"Policymakers need to empower people to make good decisions online"
Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and the approaches to countering it. In this installment, we talk with Dr. Philipp Lorenz-Spreen, a network scientist at the Center for Adaptive Rationality of the Max Planck Institute for Human Development in Berlin.
Philipp, can you tell us a bit about what you do and how disinformation became part of your work?
My work generally deals with the study of decision-making processes, and the aspect of disinformation that fascinates me the most is how it can spread in a self-organized way through social media, which is very different from how disinformation previously spread by word of mouth. We might view disinformation as a virus spreading through our social network. Information is not a virus, of course, but there are some similarities. What I find most interesting is that any person can be part of the transmission chain: by clicking the button, or not clicking it, each of us contributes to increasing or checking the spread of disinformation.
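Purely as an illustration of this transmission-chain dynamic, here is a minimal Python sketch of a toy cascade model. Everything in it – the network size, the sharing probability, the model itself – is an illustrative assumption, not something drawn from the interview or from Dr. Lorenz-Spreen's research:

```python
import random

def average_cascade_size(n_users=5_000, avg_friends=20, p_share=0.05,
                         seeds=5, trials=100):
    """Toy model (illustrative assumption): every user who shares a post
    exposes a random set of 'friends', and each exposed friend
    independently reshares with probability p_share. Returns the
    average number of users who end up sharing."""
    sizes = []
    for _ in range(trials):
        sharers = set(random.sample(range(n_users), seeds))
        frontier = list(sharers)
        while frontier:
            next_frontier = []
            for _sharer in frontier:
                for friend in random.sample(range(n_users), avg_friends):
                    # One small individual decision: click the share
                    # button, or don't.
                    if friend not in sharers and random.random() < p_share:
                        sharers.add(friend)
                        next_frontier.append(friend)
            frontier = next_frontier
        sizes.append(len(sharers))
    return sum(sizes) / len(sizes)

if __name__ == "__main__":
    # Near avg_friends * p_share = 1, the dynamics tip from fizzling out
    # to a large cascade: many small choices add up to a big difference.
    for p in (0.04, 0.05, 0.06):
        print(f"p_share={p}: on average ~{average_cascade_size(p_share=p):.0f} sharers")
```

The sketch only makes the interviewee's point concrete: whether a piece of disinformation fizzles out or floods the network hinges on many individual share-or-don't-share decisions.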
How acute is this problem, in your opinion? Should we be worried?
The scale of the problem is very difficult to estimate, but some researchers are attempting to assess the situation. A recent study, for example, found that people only rarely share complete falsehoods. I prefer to think of the problem as not just about fabricated information, but also about messages that might even be true at their core but are presented in a way meant to polarize or to manipulate. This problem shouldn’t be underestimated, because its effects are hard to reverse. Once the harm is done – once our communication systems, and potentially our political system, have been influenced by false information – it will be difficult to return to the original state. We have to do something about this now. It’s our responsibility to think about the information that people consume and how it circulates and is modified in our public sphere. Information consumption has shifted online, and we should not leave this problem to be dealt with by tech companies alone.
As a researcher, do you see your role as trying to understand the problem, or trying to solve it?
I often ask myself this question. As a researcher my job is mainly to understand this complex system. Nevertheless, scientists should not confine themselves to mere observation and analysis. They can and should share their insights to actively promote change for the public good. This is an important aspect of the role of scientists, but it is a two-step process: First, we need to understand what is really going on, and in the case of online disinformation, we are still quite far from that point, mainly due to very limited access to data, unfortunately. Then, as a second step, we need to come up with counter-interventions. In the offline world, if something is harmful, we can decide to stop doing whatever it is that is causing damage, and restore the former status quo. But we cannot do that online – we cannot switch off social media. So we need to develop models for how social media can serve society. The developers of such models can be civil society organizations or public policy professionals, but I think this, too, is a job for researchers. We can compare this enterprise with the development of vaccines, where researchers not only identify what is wrong, but also develop a solution.
We may not have a solution just yet, but are we closer to understanding the problem?
Yes, I think so. There is already a huge amount of research underway, even though the large-scale impact of social media dates back only about 10 years, which is not a lot in the world of research. Moreover, we are trying to understand the scope of a problem while access to the information we need is controlled by those who created the problem in the first place. It’s an unusual situation. Still, even though we’re behind, we can already see some patterns. We see that while the early hopes that the internet would democratize everything did not materialize, neither did the extreme fears that everyone on the internet would find themselves in a completely separate bubble of information. We are converging towards a more constructive middle ground.
Has the pandemic changed anything in the research of disinformation or in the pursuit of solutions?
The pandemic created a situation of great uncertainty in which a basis for discussion was lacking, since people were exposed to divergent ‘facts.’ In addition, people spent more time online because they were constantly at home, and they were feeling isolated and wanted to connect and share information, which was not always helpful and even amplified the problem. Many papers have already been published about the information situation during the pandemic, including the influence of psychological factors. I wouldn’t say that the direction of research has changed, but I do think health information and, more generally, science communication will have more weight in the future, since there is a need for quality information.
What, then, is currently the most important question in the research of disinformation?
The call for solutions is getting louder. I don’t think we’ve settled the discussion on just how vast the problem is. That said, a big enough portion of the public and the research community is convinced that there is a problem and that we need to do something about it. That’s why the focus is now increasingly turning to the question of “what to do.”
Who should be responsible for developing the solution? We already mentioned researchers, but if we include other stakeholders, such as lawmakers, NGOs, and tech companies, how do you imagine the division of labor?
I would imagine that the actual solutions – ‘the vaccines,’ so to speak – will probably be developed by researchers carrying out experiments that are transparent, so people will be able to see how they are tested. But implementing these solutions is a whole different question. For this we need think tanks and other institutions, like IPPI, to communicate with researchers, aggregate their findings and help present them to policymakers. This is the pipeline I imagine, but there are also alternatives.
In theory, we could take a competitive strategy: if there is sufficient public awareness of and preference for quality information, including standards for how a platform should look and work, the public will prefer the platforms that prioritize these elements. But this is possible only if there is true competition between platforms, and currently the big platforms hold a monopoly. Another path could be establishing public platforms, run as non-profits on public funds, that do not rely on advertising – but this has its own challenges.
I could imagine some of these developments going hand-in-hand. First, regulation could curb the power of the existing for-profit platforms and limit the spread of disinformation through these channels. At the same time, we could employ researchers to develop and evaluate public platforms, as potential alternatives. And who knows, perhaps someday these new platforms will become big enough to compete with Twitter and Facebook and maybe even force those actors to change their behavior.
So would you advocate for a bottom-up approach, which begins with raising awareness and educating the public about disinformation, or a top-down approach that focuses on regulation?
I’d say that regulation needs to address the problem from the bottom up, and that’s exactly the crux. Normally the law is very top-down, but we are dealing with a problem that is generated from the bottom up, where many individuals contribute in numerous small ways to create a large problem, and where the infrastructure they’re using incentivizes this sort of behavior. I think the solution needs to begin on that level as well. I admit, I’m not 100% sure how a law could do that, but regulators need to understand that they themselves are not the solution; instead, they should facilitate bottom-up solutions by making laws that empower people to make good decisions in the online environment.
Dr. Philipp Lorenz-Spreen is a network scientist at the Center for Adaptive Rationality of the Max Planck Institute for Human Development, Berlin. His research focuses on empowering democratic societal discourse through improved online environments as well as on ways to combine micro and macro perspectives on collective decision-making to improve self-organized online discourse.
This Interview is published as part of the Media and Democracy in the Digital Age platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich Böll Foundation.
The opinions expressed in this text are solely those of the author(s) and/or interviewee(s) and do not necessarily reflect the views of the Heinrich Böll Foundation and/or the Israel Public Policy Institute (IPPI).