What Are Mis- and Disinformation?

Viral Networks is a six-part series investigating the current state of mis- and disinformation online with the scholars studying it from the front lines. In our first episode, Camille François tells us how researchers identify mis- and disinformation, and Joan Donovan from the Harvard Shorenstein Center explains how platform design lets misinformation go viral, sometimes with deadly outcomes.

Guests: 

Joan Donovan, Camille François

Subscribe directly on Apple Podcasts or Spotify, or via our RSS feed. If you're enjoying the show, please leave us a rating on your podcast provider of choice.

You can learn more about Camille's "Disinformation ABC" by reading her paper "Actors, Behaviors, Content: A Disinformation ABC."

Transcript

Introduction

You’re listening to Viral Networks, a look at the current state of mis- and disinformation online, with the scholars studying it from the front lines. We’re your hosts, Emily Boardman Ndulue and Fernando Bermejo.

Emily Boardman Ndulue:

Hi everybody, and welcome to Viral Networks, a podcast about mis- and disinformation.

Fernando Bermejo:

Today, it can feel like any major social issue is exacerbated by mis- and disinformation: elections, immigration, COVID-19, just to name a few.

Emily Boardman Ndulue:

Yes, that’s right. And often, when we encounter deceptive memes or hashtags promoting conspiracy theories, it seems like that content is being shared organically. But as soon as you start separating fact from fiction, mysteries about this content begin to unfold.

Fernando Bermejo:

Maybe you’ve heard the term fake news thrown around by politicians and pundits, but mis- and disinformation goes a lot deeper than that. Around 2015, the field of disinformation studies coalesced to understand the origination and spread of disinformation in the digital space. Scholars of human-computer interaction, psychology, political science, and sociology came together to study mis- and disinformation from a number of angles, and the field has only grown from there.

Emily Boardman Ndulue:

Here on Viral Networks, we’ll be taking the forensic approach to mis- and disinformation, talking to the researchers trying to solve the mystery of who is producing this material, how they’re making it go viral, and why they do it.

Fernando Bermejo:

If you want to learn about how this stuff gets made, how it moves around, and what we can do about it, you’re in the right place. Across six episodes, each about a half-hour long, we’ll be talking with a wide range of researchers, from public health experts to social scientists to doctors and folklorists. We conducted these interviews in the summer and fall of 2021.

Emily Boardman Ndulue:

Today we’re going to start with some basics. We’ll be talking with Camille François, who developed a useful framework for studying disinformation, and Joan Donovan, one of the leading scholars on how political actors manipulate media to deceive.

Without further ado, we asked Camille exactly what mis- and disinformation are.

Camille François:

So dis- and misinformation are two different things, although of course they’re related. We call misinformation: the sharing of information that is false, but that people are sharing without an intent to deceive, without malice.

And so for instance, if I hear something about a vaccine that is actually false, and I am spreading it to my loved ones because I am concerned about it, and I want them to know, I am not trying to trick them. I am not trying to deceive them. I am simply sharing false information because, you know, maybe I believed it. And so that’s what I call misinformation.

It’s a big problem in, for instance, the medical field. Right, medical misinformation is the sharing of fake cures, which again people share without an intent to harm, but in case it might be true. And of course it’s a big issue with anything related to vaccine safety.

Now disinformation is different from misinformation because of the intent of the person who’s sharing it. So we call disinformation: information that is false but that is shared with an intent to deceive, an intent to trick your audience.

Emily Boardman Ndulue:

Why is this issue important? ... Why should sort of the average listener care about mis- and disinformation?

Camille François:

I think that it’s important because it shapes our realities. Right, I think that it used to be easier to argue that what happens online stays online. I think in 2021, that’s a difficult argument to make.

We know that what happens online shapes offline behaviors, right. We know that that separation is artificial. We know that the important decisions people make about their health offline are informed by what they’re reading on the internet. We know that when hate movements organize online, it can lead to offline insurrections. I think that’s one of the reasons why it really matters; it’s because all of this really shapes our society.

Emily Boardman Ndulue (voiceover):

In 2019, Camille developed one of her most valuable contributions to the field of disinformation studies — she calls it “The Disinformation ABC.” It’s a really helpful framework for understanding the different ways a researcher can investigate and analyze disinformation. It’s helped us structure our podcast too. A - B - C.

Fernando Bermejo (voiceover):

OK, let’s start with A, manipulative actors.

Camille François:

What this means is that sometimes you look at a piece of content on the internet and it doesn’t look like it's been amplified by bots, and it doesn’t look like the content in itself is particularly deceptive. And you wonder, why is this disinformation, right? And the reason why this is disinformation can be that it has been crafted by sophisticated manipulative actors. So this can be a campaign coming from Russian actors, Iranian actors, Chinese actors, and really any of these sophisticated actors who use disinformation for geopolitical goals.

And if you look at one degree of granularity closer, you also realize that when we say Russian actors engaging in state-sponsored disinformation, there’s actually a wide range of different types of Russian actors. If a campaign is coming from military intelligence, for instance, it’s a very different type of campaign than one coming from troll farms like the Internet Research Agency.

There’s also, unfortunately, a thriving disinformation-for-hire industry. Recently my team worked on a campaign in which there were a series of pro-Huawei fake accounts that were commenting on Belgian regulatory proposals around 5G.

So when we see this, we also realize that corporate actors too have their own strategic objectives that they may or may not achieve using these types of methods; right, creating fake accounts to disguise their identity and then to comment on issues that are important for them.

Fernando Bermejo (voiceover):

Next, Camille explained B stands for deceptive behaviors.

Camille François:

So this is about campaigns whose content is not necessarily deceptive and that are not necessarily put together by manipulative actors, but the way they’re spreading relies on distortive behaviors. So it could be, for instance, that they’re pushed by a bot army, or they’re gaming a set of systems on a platform so their impact is perceived to be much bigger than it really is. Right, it looks like this is a large-scale movement of people endorsing an idea when really it’s just a set of parameters that have been well-tuned and a set of behaviors that have been employed to make it look bigger or more impactful than it is.

That’s, for instance, famously, bots. Right, if you see a campaign that is being artificially boosted on social media, it doesn’t necessarily mean that someone particularly sophisticated and nefarious is behind it. And it doesn’t necessarily mean that the content in itself is particularly deceptive. But that is a distortive behavior, and classically we do consider that to be part of disinformation.

Fernando Bermejo (voiceover):

And finally, C is harmful content.

Camille François:

And it’s interesting because today in 2021, we absolutely accept that there is a category of content that, on its face, constitutes disinformation and that the platforms will take action on. It wasn’t the case a few years ago, and that’s sort of the last category that the platforms engaged with, I think, when disinformation became a key topic. They were more focused on manipulative actors; right, this is kind of where the big reckoning of Silicon Valley with disinformation came from in 2017, with the Russian investigation.

Emily Boardman Ndulue (voiceover):

Here, Camille was referring to an investigation by the US federal government into the possibility that Russia attempted to tamper with the 2016 presidential election through a range of techniques, including intentionally misleading political advertisements purchased on social media platforms like Facebook and Twitter.

But, as she noted to us, some companies were attempting to take action against mis- and disinformation on their platforms prior to the fallout from the 2016 election, particularly around the burgeoning anti-vaccine movement that was gaining steam on social media.

Camille François:

There had been specific platforms like Pinterest that had said, you know, there is a type of information about medical issues that, if it is false and widely distributed, we will take action on because we think it’s disinformation and it shouldn’t be there. So the first sort of large-scale trend in addressing this deceptive content was quite focused on medical issues, and then of course electoral disinformation became a category of deceptive content on which we saw platforms take fairly aggressive action.

Fernando Bermejo:

As you go through the model, it talks a lot about international actors and kind of foreign interference. It’s mostly about geopolitics. But when it gets to the C piece, which is the one about content, the example that you use is the one about Pinterest, which is about medical or health information. My impression was like, yeah, it is obviously easier to tell what is suspect or true when talking about health information than when it’s about politics, because there’s more debate there. However, has COVID changed that?

Camille François:

I think it has changed that to the extent that a complicated, nuanced scientific conversation suddenly became very mainstream very quickly. And so the types of discussions, for instance on different testing protocols for different vaccines, which used to be quite confined to scientific communities online, are now absolutely mainstream. And the other major factor in making this relationship more complicated is the emergence, growth, and concentration of conspiratorial communities online, which have played a very significant role in amplifying, creating, and distributing both mis- and disinformation on all these sensitive topics, ranging from COVID-19 to electoral integrity.

Fernando Bermejo:

Joan Donovan, the research director at the Harvard Shorenstein Center, believes that spreading misinformation is something we do all the time, and have been doing probably forever. Sometimes we just get things wrong. But, what’s fascinating about Joan’s work, is that it focuses on how political actors exploit this simple communication issue at huge scales and high speeds on the Internet.

Joan Donovan:

So when we're researching, misinformation is just the biggest, broadest bucket, and it essentially means the sharing of false information or falsehoods. There's no communication without misinformation. Everybody makes speculative statements. “I think I know,” “Somebody told me,” “A lot of people are saying.”

Usually there has to be another criterion for us to count something as misinformation, which is it has to come from some kind of source where people are taking it relatively seriously. It's not just an individual.

There has to be some other phenomenon. Either it's coming from an institution or a government body or a journalist, or it's coming from a lot of people at once. There's some kind of network effect. What is driving the phenomenon? Who is behind it? So for misinformation, we start to understand it when we look at criteria other than whether information is true or false.

When it comes to disinformation though, we are very much looking at intention. It's a very high bar to call something disinformation. So disinformation is the intentional spreading of false information for some kind of political or financial end. And so most of the stuff we study falls into the bucket of financially incentivized misinformation or disinformation, grifts, scams, hoaxes, that kind of stuff.

Media is the artifacts of communication. So media could be anything that's manipulated, from a URL, to a phishing campaign through email, to a headline that is clickbaity. Media manipulation just refers to this very broad phenomenon of manipulating media to achieve a certain outcome.

And the reason why we study media manipulation on my team is so that we have a larger category through which we can talk about disinformation and the tactics of disinformation, what kind of media is manipulated, as well as another way for us to discuss things that are like propaganda, but don't come from the state.

Someone is trying to trick you, and your job is to figure out how. Are they hiding who they are? Are they hiding what their intention is? Who is paying them?

In some ways the field itself develops out of the technology, because the technology isn't built for security and privacy. It's built for openness and scalability. And so it doesn't surprise me that you get these practices of media manipulation that require a subgenre of academic research to unpack and uncover. I view the field as something of a hybrid between cybersecurity and media studies, in a very strange way. Of course people have always tried to hoax journalists. They wouldn’t call it PR if that wasn’t also a heist; they would call it what it is, which is media manipulation.

Fernando Bermejo (voiceover):

Joan told us she considers another factor in addition to actors, behavior, and content. We could give it a D, for design.

Joan Donovan:

At the beginning of the pandemic, it was a torrent of hoaxes, scams, and grifts utilizing search engine optimization to try to get people's identities, to try to get their credit card numbers. And we saw all kinds of fake insurance, all kinds of fake cures, all kinds of fake masks, and just lots of scam products flood our media ecosystem, Facebook in particular.

And through it all, these companies haven't really been able to stem the tide of misinformation, because it's literally an artifact of the design of the systems themselves. Which is to say that I think the field now is starting to understand that we're not just dealing with actors, behavior, and content.

We're also dealing with the design of the systems themselves. And so you can't just say, all right, well, Facebook get rid of the scam on your system without also ensuring that Facebook has a prevention strategy in place that won’t allow this to keep happening.

The repetitive problem with media manipulation and disinformation actors, is that they know that any breaking news event can be leveraged to introduce fake information or false information into the world.

And it's actually really hard to start up a campaign from scratch and get people to pay attention to it. So we've seen over the last several years extreme weather events, mass shootings. These become places in which people will introduce false news or fake information as a way to draw people into some other set of concerns, right? They’ll use sock puppets, they'll use bots. A lot of the time, what they’re trying to do is just give people enough information so they’ll click like and follow, and it’s seeding the field so that when they do want to launch a political campaign, they’re able to.

We saw accounts in 2016 that were posting stuff about the Kardashians for a while, just to gain followers and get attention so that when they wanted to, they could switch to political content.

The system's designed so that thousands of individuals who don't live together, who don't communicate with one another, all see something online and are like: that's cool, like it, next thing.

Emily Boardman Ndulue (voiceover):

What Joan is talking about here is kind of terrifying to think about. Social media helps people communicate with strangers so quickly and at such a large scale that any crisis can be exploited at the drop of a hat. You might find yourself asking: where did we go wrong? Is there any good that can come out of a system that lets information basically move around unchecked?

Joan Donovan:

And so, as I think about what we know in this field, the thing that I focus on is that people do make decisions based upon real-time information.

And nowhere did we see that more clearly than during the pandemic. How do you get a society to start wearing masks? How do you get a society to stop going to work? We've witnessed some amazing, huge, broad social changes that have come about because everybody had to be connected to the same information.

And of course that doesn't come with uniform participation, and people still resist. But at the same time, things changed, and things changed en masse, and they changed fairly quickly, which is testament to the fact that the information has immense value, and immense purpose, and immense power as a result. Which is why I don’t think we should leave it in the hands of a few billionaires to consolidate communication power.

But the pandemic has also taught us that if you tell people at 8:00 PM on Fox News that hydroxychloroquine might act as a prophylactic against COVID, then in seconds they'll be searching for hydroxychloroquine on Google. And within minutes, they'll be clicking on things that they think are related searches in order to try to buy supplements. And then downstream of that, you'll see a shortage at the pharmacy of hydroxychloroquine. You'll also see people buying tonic water because it contains quinine, which comes up as a related search.

And then some people, at least two people, ingested fish tank cleaner, and one of them passed away, because they thought one of the chemicals in it was going to prevent them from getting COVID. And so people do take action based upon information. And as a result, we have to be really clear about what kind of advice, especially medical advice, people are getting when they're searching for things like "COVID cure" or "COVID prevention".

Fernando Bermejo (voiceover):

But Joan did offer us some hope. There are ways to design social media to encourage community moderation of facts, which can foster a healthier discourse.

Joan Donovan:

The way in which Reddit has dealt with coronavirus information has been really exemplary, in the sense that people, when they post things on Reddit, have to tag and categorize them so that there's a little bit more context. There's moderation on the message boards, so things that are erroneous or problematic get removed fairly quickly.

And there's just a different set of care given to keeping the information environments high quality.

Emily Boardman Ndulue:

Joan, I'd like to spend some time with you trying to untangle political misinformation and medical misinformation. Where are the phenomena and the dynamics at play the same? What are some key differences?

Joan Donovan:

Completely different, and they're completely different because the rationale for why one can be true and one can be false is completely different. Everything in politics is, to some degree, up to discretion. Whereas we have a scientific method that establishes facts. Facts are merely patterns that are reliable.

But what is being leveraged with medical misinformation is, by and large, scientific uncertainty. Which is to say that when facts have yet to be established, or when facts are in controversy, that is the wedge that allows politicians, grifters, and wannabe influencers to step in and say, “You’re being lied to.”

And it's really important that we understand that when millions of people are getting the wrong information en masse, they take action on it, and it can actually turn really deadly.

Fernando Bermejo:

Joan just scratched the surface of the issues with mis- and disinformation that we’re facing as a global community during the COVID-19 pandemic.

Emily Boardman Ndulue:

That’s right. In our next episode, we’ll be diving into exactly that. How has our collective information disorder shaped the way we’re responding to COVID? We’ll hear from an ethnographer of conspiracy groups on Facebook, a historian of conspiracy theories, and a public health expert sharing what he’s seeing in his native Nigeria.

Fernando Bermejo:

Still to come on Viral Networks: a look at who actually creates disinformation content, conversations with experts about psychological interventions to combat it, studies of coordinated inauthentic behavior, and more.

Thank you for tuning in and we hope you join us for the rest.

Credits:

Viral Networks is a production of Media Ecosystems Analysis Group. We’re your hosts, Emily Boardman Ndulue and Fernando Bermejo. All episodes are produced and edited by Mike Sugarman. Julia Hong joined us as a script writer and provided additional research. Music on this show was composed by Nil and our producer Mike. Funding to produce this series was provided by the Bill and Melinda Gates Foundation. And last but certainly not least, we want to give a big thank you to all of the experts who joined us for interviews on this show.