"Deescalating polarization will contribute to diminishing the problem of misinformation"
Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and the approaches to counter it. In this installment we talk with Nicole Krause, PhD candidate and Assistant Researcher in the Department of Life Sciences Communication at the University of Wisconsin-Madison.
Tell us a bit about yourself. How did you become involved with disinformation?
I am currently a PhD candidate at the University of Wisconsin-Madison, in the Department of Life Sciences Communication, and I’m a member of the Science, Media, and the Public (SciMEP) research group. I focus on issues pertaining to science communication, but we also work on political communication and the intersection between the two. I am also working as a Civic Science Fellow, looking at how to communicate more effectively with conservative, religious, and rural audiences in the United States, where misinformation is a component. I started researching misinformation partly because it comes up a lot in science communication, especially in discussions about disbelief in anthropogenic climate change and campaigns that discredit climate science. With the pandemic, there has been an influx of interest and research in this area, particularly at the intersection of science, politics, and society.
What are your thoughts regarding where we are when it comes to understanding and tackling disinformation? How big is the problem nowadays, compared to 2016?
It all starts with the question, “How do you define what counts as misinformation?” My collaborators and I have written a few papers about this recently, one available as a preprint, and another that was recently published.
The difficulty in assessing the scope of possible misinformation problems is that so much depends on the state of the evidence base according to which you’re defining what counts as true or false. In the context of the pandemic, it is sometimes not easy to clearly define something as misinformation, since it’s an emerging crisis – the science is evolving. The ways we define and conceptualize misinformation matter for how we measure it, and measurements are in turn important, among other things, for assessing the efficacy of our interventions. We produce a lot of research suggesting that there’s a massive infodemic. It becomes difficult to parse: Is a lot of that misinformation actually going to be problematic? If people think the earth is flat, that poses no immediate threat to humanity. But we might care when a large volume of misinformation claims that injecting people with bleach can cure COVID. How we make these distinctions is crucial for accurately determining the scope of the problem and the severity of the threat.
If we’re still unable to define it, have we made any progress?
There are studies that define misinformation as something that contains potentially misleading scientific claims. For example, an experiment tested whether readers would process a message about the potential hazards of drinking raw milk differently when it was presented in a context of higher or lower media literacy. Such experiments have yielded the useful finding that if you improve media literacy, people tend to process misinformation better. However, it’s hard to generalize this finding to all forms of misinformation. In the context of non-politicized issues like raw milk, there is evidence that certain interventions might help mitigate misperceptions. But this finding is less useful if we allow for the possibility that not all forms of misinformation are created equal, and that an intervention that works in one context will not necessarily work in another. There is some progress, but there’s nuance that must be considered.
There was a lot of research attention given to political disinformation, and now there seems to be a shift towards science communication. What research findings has the pandemic yielded?
The pandemic helps to highlight how difficult it is to find a “one size fits all” solution to mis- or disinformation. It has shown that addressing misinformation when there is clear scientific evidence, as with climate science, where we have decades of research and roughly 97% agreement among the scientific community, is a very different game than when the evidence is changing and evolving. The evidence base during the pandemic is constantly shifting, making it particularly difficult to correct misinformation. It’s also a difficult time for scientists to communicate without producing misinformation themselves. As the pandemic progresses, scientists and public health officials are in the tenuous position of having to make recommendations and communicate to members of the public what they need to know about this virus. Politics adds a lot of dynamics and confusion to correction efforts; when an issue is less politicized, it is easier to maintain credibility as the science evolves. These factors influence how we respond to misinformation.
What are some of the insights that you have gained regarding both the nature of the problem and the potentials for intervention? Could you share some of them with us?
To break it down, there are three kinds of insights about interventions: 1) those that may not work; 2) those that might work; and 3) those that work in one context but can exacerbate problems in another, or with different people. The pandemic exposed the idea that, given that science is a self-correcting system, the more members of the public understand what scientific processes are and how knowledge is generated from science, the better the chances that scientists are not perceived as misinformation producers.

When talking about the problem of trust in science, there is often a tendency to locate the problem in people, in people’s minds. This can cause us to overlook structural problems. Yes, we can improve epistemic literacy among members of the public, but what can we do on the other end, where misinformation is produced? The supply of false and misleading claims poses a problem. Retracted scientific studies that remain available can be spread as if they were current, and single-study science-journalism articles that present a scientific claim very visibly, as if it were certain or settled, can also be damaging. For example, a study suggesting that butter is bad for you might make nutrition science appear as if it doesn’t know what it is talking about if, later, another study says the opposite. Another part of the problem on the supply side is the decline of science journalism. More training, better science communication, and more opportunities for people who are professionally trained to communicate are needed.
Is this a problem for science communication globally? If so, what are you seeing there?
Some of the things I’m describing are definitely US-specific. The epistemic literacy example is an education question, so the education system in a given country will potentially be implicated. The same would be true for the media system: the United States has a different media system than many other nations, which often have more of a hybrid of public and private funding. The United States has such a commercialized media system that this problem is particularly pronounced. The more science becomes politicized in the United States, the more problematic some of these things will become. Individuals tend to form opinions and beliefs that are consistent with those they held previously, in order to protect their identity. In the United States, political partisanship is a very strong component of identity (Republicans and Democrats, or just conservatives and liberals). We would like to think that this wouldn’t apply in the context of science communication, as you want science to be a politically neutral communicator. But we’ve seen the policy gridlock and problems that can result from the politicization surrounding science.

With the emergence of social media, we have seen how misinformation can exacerbate polarization. In the past, it used to be more issue-specific, but what I find disturbing for the future of misinformation is the emerging politicization of science overall. The signs put up on people’s front lawns prior to the 2020 US presidential election, saying “People are people. Love is love. Science is real!”, are a good example of how conflating left-leaning political ideas with the idea of believing in science could make science appear to be a liberal thing. Such identity markers will make it more difficult to correct misinformation about science, as people will come to see scientists as the “other side” (i.e., the political left). We have to do everything we can to minimize the association of science with any particular political ideology, whether that means appealing to the political elites or asking people to refrain from putting up the kind of signs I just described in their front yards. Don’t play this game that science is on one side or the other.
An example of this coming from the elites, at least in the US, has been the movement by liberal political elites to say things like “Trump is anti-science and now, President Biden is going to save science.” Maybe it works for the liberals, because it activates their political base, but I also think it comes with the cost of making science a partisan issue. Leaders of scientific institutions also should not feed into that. We don’t need the president of the American Medical Association (AMA) making statements that suggest Biden is the president we need to save science. The view that there’s a war going on between the political left and right, and that the fate of science is in the liberal heroes’ hands, is not helpful.
One of the tasks to ameliorate the situation would be to communicate more effectively and meaningfully with groups that are alienated from science, such as conservative and religious audiences, to make it clear that science is not opposed to their value systems and identities. One side of that coin is to think carefully about how science communications might be giving gratuitous offense to certain groups or drawing “us-versus-them” divides that are not helping. Recent work has shown, for example, that “war on science” messaging can harm scientists’ credibility among conservatives, who perceive these messages as an attack on their identities, and that communications making strong scientist-atheist associations can exacerbate perceptions that science as an institution is pushing a particular “moral agenda.” Any effort to deescalate some of the polarization will contribute to progress on the misinformation front.
Are there other examples besides the clashes between science communicators and religious beliefs in the US?
There are other identity groups in the United States that clash with scientific recommendations on different topics. The broad polarization or alignment of science with a particular political ideology is less obvious, but it’s emerging. The pandemic has shown how attitudes toward science and the willingness to hold beliefs that run counter to scientific consensus (misinformation and misbeliefs) can shift on a single issue. People are motivated to protect their identities depending on how the issue is developing, and where their group is positioned. As the discourse around the issue develops to make identity allegiances salient, people’s positions can shift. The challenge is thus broader than the presumed tensions between science and religion as such. There are other kinds of value systems and identities that can matter, too.
What can be done to address the politicization of science and the challenges that it poses?
It’s a multi-layered problem. There are structural realities, and then you have algorithms underlying social media systems that are designed to push content based on what people are predisposed to believe. There are underlying economic incentives, too. It’s difficult to imagine why this would change as long as people keep using the platforms, and as long as regulation in the United States remains difficult or unlikely. It’s hard to assess the scope of the problem, because a lot of the data about how misinformation circulates on social media, and to what extent the design of the algorithms is a contributing factor, is proprietary. It’s Facebook’s data, and unless they decide to make a shift towards transparency, that area seems difficult to move in. Yet, in order to better understand the role that social media is playing, it would be impactful to understand the algorithms and their effects more clearly. We could also better understand how social media and legacy media, respectively, are implicated in possible misinformation problems, or how they reinforce each other. It’s not all just social media.
What is the way forward in your opinion? Who should take the lead?
I don’t believe in placing the onus primarily on individuals. People should make conscious and careful choices, but that’s like saying, “We can address climate change by having everybody just start buying energy-efficient products.” This ends up erasing the fact that some of the bigger problems are at the policy or structural level. We can communicate to individuals about what to do differently, or about ways to improve the education system in order to increase individual abilities and literacy in information environments. That’s all well and good, as long as we take care not to make individuals feel as if they’re the sole problem, and as long as our focus on arming individuals does not let us indulge in the belief that correcting individual misperceptions is a cure, or even our highest-priority challenge.
I think that the most effective solutions will come top-down, from bigger, structural changes initiated by people in positions of power. That’s where individuals come in, since such changes perhaps aren’t going to happen without sufficient pressure from members of the public. I don’t want to come off as a pessimist, but it’s difficult to imagine this kind of structural change, or related shifts in the public’s behavior, happening quickly. It is encouraging to see the increased attention given to the negative effects of social media. There is evidence that members of the public are becoming more aware of the potential harms these platforms pose, becoming more cautious users, and demanding more action on the part of government and politicians. I think solutions have to come from both directions. And yet, despite these signs, I don’t know if all the incentives are there, either for people or for institutions, to make these changes.
Lastly, I would add the need for nuance about the differences between countries. We’ve talked about how misinformation functions and the factors that contribute to its spread versus those that contribute to effectively correcting problems. The problems aren’t only about the type of issue itself or how politicized it is, but also the cultural context and the political climate. These factors can matter a lot, so we should recognize them, even if it means that researchers, communication practitioners, and policy actors will have to do additional work to understand the right solutions for the right contexts, and to do somewhat niche cost-benefit calculations about how to intervene in misinformation, including whether to intervene at all. The silver lining is to recognize that if there’s research happening all over the world, maybe we’ll have insights so we don’t have to reinvent everything all the time, and we can gain something from each other’s work. It’s not going to be “one size fits all,” but presumably some of our problems are similar. Hopefully, more international collaboration can help.
Nicole M. Krause is a PhD student in the Life Sciences Communication department at the University of Wisconsin-Madison, where she is also a member of the Science, Media, and the Public (SciMEP) research group. Nicole’s work focuses on how people make sense of (mis)information about scientific topics, with a particular emphasis on finding ways to facilitate productive interactions among polarized social groups on controversial topics like human gene editing or artificial intelligence. Nicole can be found on Twitter as @nicky_krause.
This Interview is published as part of the Media and Democracy in the Digital Age platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich Böll Foundation.
The opinions expressed in this text are solely those of the author(s) and/or interviewee(s) and do not necessarily reflect the views of the Heinrich Böll Foundation and/or of the Israel Public Policy Institute (IPPI).