Platform Forensics: How Misinformation Content Goes Viral

This week, we're putting all of the pieces together, looking at how influential groups, from major political parties to well-coordinated extremists, tailor mis- and disinformation content for specific platforms. We're joined by Scott Babwah Brennen, who studies viral images; Kiran Garimella, who studies Indian politics on WhatsApp; and Jeremy Blackburn, who traces far-right American groups across shadowy platforms.

Guests: 

Scott Babwah Brennen, Kiran Garimella, Jeremy Blackburn

Subscribe directly on Apple Podcasts or Spotify, or via our RSS feed. If you're enjoying the show, please be sure to leave it a rating on your podcast provider of choice.

Transcript

Jeremy Blackburn

You do see these cave-like, if you will—maybe dumpster fires is the better way to put it—fringe, extreme communities that absolutely do produce content and it evolves. We've seen it with memes in particular, how you can see the really racist stuff, you can see how it changes over time to become more palatable as it moves through the network.

Introduction

You’re listening to Viral Networks, a look at the current state of mis- and disinformation online, with the scholars studying it from the front lines. We’re your hosts, Emily Boardman Ndulue and Fernando Bermejo.

Fernando Bermejo

Hi everybody and welcome back. That was Jeremy Blackburn, giving us a glimpse of something we’ll be talking about a lot today: what sort of mis- and disinformation content goes viral.

Different communities use different platforms in different ways, and the people successfully coordinating mis- and disinformation campaigns are experts in those contexts. It’s why they understand how to make a real impact with just the right content.

Emily Boardman Ndulue

That’s really the focus of today’s episode. We’re not just asking what this content looks like, but how it’s tailored for and targeted at a variety of platforms. And there’s no one-size-fits-all approach to making this content.

In addition to hearing more from Jeremy, Scott Babwah Brennen will tell us about his work studying imagery associated with COVID misinformation, and Kiran Garimella will walk us through a fascinating study he conducted on Indian political campaigns gaming WhatsApp.

Fernando Bermejo

Let’s hear from our first guest, Scott.

Scott Babwah Brennen

My name is Scott Babwah Brennen. I'm currently senior policy associate at the Center on Technology Policy at Duke University. And before that, I led research for the Oxford Martin program on misinformation science and media at the University of Oxford, which was a joint project between the Reuters Institute for the Study of Journalism and the Oxford Internet Institute and the Computational Propaganda team there.

Fernando Bermejo

Scott was at the Oxford Martin program when the COVID-19 pandemic first broke out. In April 2020, he teamed up with the journalism non-profit First Draft to analyze COVID-related content from a large collection of fact-checks that First Draft maintains. What he and his team found, early on, was a mess.

Scott Babwah Brennen

And we saw, quite unsurprisingly, that the amount of misinformation about COVID just skyrocketed in the first three months of the year. Of course, in early January, few people were actually talking about it, so it's not that surprising that there wouldn't be that much misinformation. And we see just the dramatic increase through February and March.

And I think most notable here is that 39% of the pieces of content included what we described as public authority action, or claims about the actions of public authorities. And this is things like, remembering back, claims that the Florida Department of Health was instituting a lockdown when it wasn't, or claims about things that the WHO or the UN were doing that were not true. And I think what's really interesting about that is, if nothing else, it points to the fact that people were really hungry in the early days of the pandemic for clear information about what public authorities were doing.

Emily Boardman Ndulue

As the pandemic went on, Scott became more and more interested in something he would ultimately term “visual misinformation.” Much of the mis- and disinformation content that had previously been studied was text-based, such as dubious articles and headlines published by sketchy outlets or propaganda and rumors that go viral on social media. Scott, on the other hand, became interested in images that contain misinformation. Those could be memes forwarded on WhatsApp, videos shared on Facebook, falsely contextualized photos on Twitter – just to name a few.

Scott Babwah Brennen

What we're really interested in was the functions that the visuals were serving in these pieces of misinformation, that, again, had all been fact-checked by independent fact checkers. In the frames that we identified, one of the most prominent ones was what we call just intolerance. The way that images can pull out the underlying racism or xenophobia that we've seen in some of these claims.

So one of the examples that we used, this is back in February, maybe, of 2020, and it may have been Indonesia, where supposedly a Chinese national was roaming around the country intentionally spreading the virus. But the image that they use is really washed out. It looks like maybe a passport photo or something, but just the way that they've done the colors on it makes it look almost like a mugshot.

Scott Babwah Brennen

But I think that maybe the most interesting one was the way that visuals often directly serve as evidence for claims made in pieces of misinformation. There's one that was like this image of people burying bodies in a mass grave. But it's a still from Contagion, the movie. But of course when it was shared, it said, "Oh, look at all the bodies from the COVID victims."

And I think the other notable thing about that is all of the manipulated images that we saw, or I should say we actually looked at videos as well, so all of the images and videos, were what Data & Society called cheap fakes, not deep fakes. They were all made using really simple tools or techniques to modify images, rather than deep fakes made with some complicated AI-based system. One that really sticks out is this video where they had edited together a couple of different clips. It started with a news broadcast from Australia talking about, I think it was, ongoing research at an Australian university on a potential vaccine or cure or something. And then they cut in images of bananas to make it seem like if you just ate bananas, that would be the cure.

Emily Boardman Ndulue

During our interview, Scott stressed that this content always exists within a socio-political context. Maybe it resonates with some political ideology or set of biases. Maybe it reinforces a popular narrative. He argues that it’s impossible to make sense of how this mis- and disinformation functions without understanding that context.

Scott Babwah Brennen

I feel like it's getting less and less meaningful to talk about misinformation without talking about the political situation in this country and in other countries. I can't get a handle on the problem of misinformation about COVID for example, without recognizing what is going on in politics in this country and the way that an entire political party has chosen to embrace... Usually, I'm hesitant to say it's disinformation. To me, it's bullshit in the philosophical sense. To me, it seems more about the strategic use of communication, true or false, it doesn't matter, to achieve a particular end.

But I think the other big one is just identity more broadly. There's been some really great research recently that shows just how entangled misinformation and identity are, that we have to recognize the way that particular groups in particular moments use pieces of misinformation to do important identity work.

Like racism. I think we all learned a really important lesson looking at Russian-backed disinformation in 2016. The big finding of the couple of reports that came out, the ones the Senate Select Committee on Intelligence commissioned, I think Renée DiResta did one and then ComProp did one as well, was just how concerted an effort the IRA put into stoking existing racial animosity. And that's not an accident. Race is such an important fault line in America that misinformation often plays on it.

But actually in my class, on the first day I assigned the famous chapter 16 from W.E.B. Du Bois's Black Reconstruction in America, which is all about this incredibly impactful disinformation campaign that happened during Reconstruction: an effort to use race to fracture the emerging new working class, to fracture the ability of the white working class and the freed people to coalesce into a shared identity. And it was that disinformation campaign which certainly has continued to have an incredibly impactful, lasting influence.

So we certainly can't say that misinformation is a new problem, or that deeply impactful misinformation is anything new.

Fernando Bermejo

As Scott alluded to, it’s not just the United States where misinformation problems are tied up with political power struggles.

Our next guest, Kiran Garimella, studies how mis- and disinformation is harnessed by politicians in Indian society, often in ways that are meant to make top-down propaganda efforts look like grassroots viral media.

Emily Boardman Ndulue

And how that content travels is culturally specific too. Kiran has focused his research on the encrypted messaging service WhatsApp, which is the dominant social media platform in places like India and Brazil. It’s owned by Meta, the same company that owns Facebook, but unlike Facebook, every interaction is private: either between two people, like a text message, or in larger, but still private, group chats. This makes WhatsApp uniquely difficult to moderate or study.

Kiran Garimella

We were one of the first groups to do this type of research on WhatsApp. So at least back in the day there was, I mean, even now there's this argument that a lot of misinformation spills over into offline violence, or rioting, or killing people, lynchings and things like that.

So there was this one period in India, like a six week period where over 30 people were killed because of this one rumor that started spreading about child kidnapping in India.

In the case of Western, educated, industrialized, democratic countries, at least when you look at misinformation, it's maybe mostly an issue of digital literacy, and we're slowly realizing what kinds of externalities might happen because of these issues. But in India, it just started with people getting killed, right? And it's not just India; it happened in lots of countries, it happened in Nigeria, it happened in Mexico. In Brazil, elections were to a certain extent hijacked because of these sorts of issues.

Emily Boardman Ndulue

We'd like to talk to you a bit about platform and online space design, and how design contributes to people being exposed to misinformation and disinformation. What have you seen regarding that in WhatsApp?

Kiran Garimella

So WhatsApp is a very minimalistic app, and that's one of its fortes, that it's really a simple app. It's not really hard for a first-time internet user to use. But this forwarding feature is really deadly when it comes to the wider spread of information.

So in the initial days you could forward to over 200 people at once with a single click. Then they changed that to 20, then to five, and now I think it's one, so you can only forward to one person at a time, especially for information that has already been forwarded quite a bit. So these sorts of forwarding capabilities, which look very benign and basic for a social networking feature, can actually play a very big role in making things viral and having them spread really fast into different parts of the ecosystem.
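A quick back-of-the-envelope sketch makes the stakes of those caps concrete. If you model a forwarded message as a branching process, where each receiving chat re-forwards with some probability, the cap sets the branching factor. The re-forward probability and number of hops below are illustrative assumptions, not figures from Kiran's research:

```python
# Minimal branching-process sketch of WhatsApp forwarding caps.
# Assumptions (illustrative only): each receiving chat re-forwards
# with probability p, and one forward reaches up to `cap` chats.

def expected_reach(cap: int, p: float = 0.2, hops: int = 6) -> float:
    """Expected number of chats reached after `hops` rounds of forwarding."""
    reach, frontier = 1.0, 1.0
    for _ in range(hops):
        frontier *= p * cap  # chats that re-forward times chats each reaches
        reach += frontier
    return reach

for cap in (200, 20, 5, 1):
    print(f"cap={cap:>3}: ~{expected_reach(cap):,.0f} chats reached")
```

Under these toy numbers, a cap of 200 is explosively supercritical, while a cap of 1 keeps the expected cascade to a handful of chats, which is the intuition behind the progressively tighter limits Kiran describes.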

Unlike other social network platforms, there's no feed, there's no algorithmic curation, there's not even friend suggestions or anything. You build your own network, you do your own curation, you join groups that you want to be part of.

So I think that's a very interesting use case to compare against other platforms like Facebook, Twitter, or Instagram, or any of these others, where there is that algorithmic loop: you make some choices and the algorithm makes some choices for you, and then you are in that loop. But on WhatsApp, it's all your choice.

One of the specific features of WhatsApp is this end-to-end encryption, and the role that plays in information spread, or, on the other end, in content moderation. So almost no content moderation exists on WhatsApp, except if it's child porn and things like that. They have some tools to detect and stop those.

But otherwise, content moderation mostly doesn't exist: there's no fact-checking or taking down of content or things like that. It's a balance. On the one hand, it allows really private, secure communication. On the other hand, it might allow things like rumors to spread unchecked.

Emily Boardman Ndulue

According to Kiran, that end-to-end encryption has traditionally made WhatsApp pretty difficult to study, and therefore not a particularly popular platform in disinformation research. Since nothing is public on WhatsApp and there’s no research API, the only way to see what’s happening on the platform is to actually look at it on a phone. But with 2 billion people using the service, Kiran knew he had to come up with some way of studying it at scale. So he bought a lot of phones. Like, a lot of phones.

Kiran Garimella

We actually built our own infrastructure. So what we did is we bought a dozen phones, and all of these are cheap Android phones, less than $100. Each of them is also dual SIM, so they had two SIM cards. So we had two dozen SIM cards, and with each SIM card, we activated a WhatsApp account. Like I said, we had roughly 10,000 groups that we wanted to monitor, so we wrote scripts to automate the process of joining all these 10,000 groups.

Because WhatsApp is end-to-end encrypted, the database is also stored on your phone. So it's stored on the servers, but it's also stored on your phone. And because we own the phone, we can get that database, we can retrieve it, we can decrypt it, and get the contents of the database. You can also download whatever images or videos were sent. So this is a lot of engineering, but yeah, that's the process that we went through.
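Once a backup like that has been pulled off a phone and decrypted, it is an ordinary SQLite file, which is what makes the rest of the pipeline tractable. Here is a minimal sketch of that last step; the `messages` table and its column names follow older msgstore.db layouts and are assumptions here, since WhatsApp has changed its schema over the years:

```python
import sqlite3

# Sketch: read messages out of a decrypted WhatsApp backup (msgstore.db).
# The table and column names below follow older msgstore.db schemas and
# are assumptions; newer WhatsApp versions use a different layout.
conn = sqlite3.connect("msgstore.db")
rows = conn.execute(
    """
    SELECT key_remote_jid,  -- the group or contact a message belongs to
           data,            -- message text (NULL for media messages)
           timestamp        -- epoch milliseconds
    FROM messages
    ORDER BY timestamp
    """
).fetchall()

for jid, text, ts in rows[:10]:
    print(ts, jid, (text or "<media>")[:80])
conn.close()
```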

One of the things that we did is a large-scale annotation of thousands of images with the help of journalists. We found that roughly one in eight images shared in the data that we collected, and again, that's millions of messages across 10,000 or so groups, were misinformation. So one in eight images that people saw were misinformation. That's quite a bit of misinformation that people were exposed to. So that's one finding: the fact that there are lots of images being shared, and images being used as misinformation.
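Hand-annotating images at that scale is only feasible if near-duplicate copies, the same meme forwarded across hundreds of groups, are collapsed first. A common way to do that is perceptual hashing; the sketch below shows the general technique (using the open-source ImageHash library and a hypothetical image directory), not necessarily the exact pipeline Kiran's team used:

```python
from collections import defaultdict
from pathlib import Path

import imagehash       # pip install ImageHash
from PIL import Image  # pip install Pillow

# Cluster near-duplicate images by perceptual hash so each cluster
# only has to be annotated (misinformation or not) once.
clusters = defaultdict(list)
for path in Path("whatsapp_images").glob("*.jpg"):  # hypothetical directory
    h = imagehash.phash(Image.open(path))           # 64-bit perceptual hash
    clusters[str(h)].append(path)

# Review one representative per cluster, biggest clusters first.
for h, paths in sorted(clusters.items(), key=lambda kv: -len(kv[1]))[:20]:
    print(f"{len(paths):>5} copies -> {paths[0]}")
```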

Emily Boardman Ndulue

And once he had a way of collecting data, he was then able to actually start understanding who was behind that misinformation.

Kiran Garimella

I mean, one of the reasons why countries like India or Brazil were amenable to the study of WhatsApp is because of the way in which political parties have become invested in creating WhatsApp infrastructure. I mean, they've realized that everyone is now on WhatsApp and this is one way to reach these people digitally. So they really invested quite a bit to create these groups.

The idea is that you create these groups and you have information flow from the top. So someone creates a meme or some social media messaging, and then it flows down to the bottom, and then it just gets spread to these users. There were surveys that were done in both India and Brazil that show that roughly one in six users of WhatsApp were part of such groups created by political parties. So if you have, I don't know, 500 million Indians using WhatsApp, so one in six of that is roughly already close to 100 million, right? So that's a lot of people who are in these sorts of groups. So these groups are not some fringe phenomenon or something.

Fernando Bermejo

Kiran wanted to understand how political actors are using WhatsApp. He discovered and analyzed usage patterns by India’s majority party, the BJP – led by India’s current prime minister, Narendra Modi.

Kiran Garimella

What is being done is you make use of this WhatsApp infrastructure and you mobilize people. Say you want to get a certain hashtag trending on Twitter: what is done is you create a Google Doc that contains the hashtag and a list of example tweets for that hashtag, you post it in your thousands of WhatsApp groups, and you encourage people, "Hey, just tweet about this hashtag. You can copy-paste some of these example tweets, just change the content inside," right?

Even if, out of all these tens of thousands or hundreds of thousands of groups, that's already, like I said, millions and millions of people who are supporters, just a few thousand copy-paste at the same time, that hashtag can get trending. We looked at roughly 75 such campaigns run through WhatsApp that were meant to get trending on Twitter. We found that 92% of them, so 69 of the 75 campaigns, were trending on the day they were supposed to be. So this is a really well-orchestrated campaign that is run by the BJP. This is something that we realized because we were part of some of these groups, luckily, but if you're not part of these groups, you couldn't have observed that this sort of manipulation is happening. So we don't know what other things are being done. We are just digging through the data and finding out the sorts of interesting things that are happening on a closed platform like WhatsApp.
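The coordination Kiran describes leaves a measurable fingerprint in data like his: the same hashtag surfacing in many unrelated groups inside a short window, often alongside a shared Google Doc link. Here is a minimal sketch of that signal, with a hypothetical message format and illustrative thresholds:

```python
import re
from collections import defaultdict

HASHTAG = re.compile(r"#\w+")

def coordinated_hashtags(messages, min_groups=50, window_hours=6):
    """Flag hashtags that appear in many distinct groups within one window.

    `messages` is an iterable of (group_id, epoch_hours, text) tuples,
    a hypothetical stand-in for messages scraped from monitored groups.
    """
    sightings = defaultdict(set)  # (hashtag, time bucket) -> groups seen in
    for group_id, t, text in messages:
        bucket = int(t // window_hours)
        for tag in HASHTAG.findall(text.lower()):
            sightings[(tag, bucket)].add(group_id)
    return {tag: len(groups)
            for (tag, _), groups in sightings.items()
            if len(groups) >= min_groups}
```

A burst that clears the threshold is only a lead; confirming a campaign, as Kiran's team did, still means checking whether the hashtag actually trended on Twitter on the scheduled day.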

Emily Boardman Ndulue

The BJP operation that Kiran described to us is shockingly sophisticated, and it really hits on that entire Disinformation ABC we’ve been working through. It’s not really helpful for Kiran to isolate the problem into actors or behaviors or content – instead, studying WhatsApp, he’s seeing how all of these work in concert.

Fernando Bermejo

And we can zoom out a little further too. That piece about the BJP using a Google Doc to coordinate viral campaigns on WhatsApp is actually just one example of how particularly sophisticated actors in this space use multiple platforms at the same time. To better understand this final piece of the puzzle, we talked to Jeremy Blackburn, who specifically researches the way that disinformation travels across platforms, coordinated not just by major political parties like the BJP, but by political extremists too.

Jeremy Blackburn

I'm Jeremy Blackburn, and I'm an assistant professor in the Department of Computer Science at Binghamton University. My PhD work was in bad behavior online, so things like cheating, and toxicity, and cyberbullying in online games. And then 2016 happened and the same general sphere of bad behavior and weird corner case stuff immediately led us into this, basically.

So obviously, there's all sorts of problems online. They're all over the place. Mis- and disinformation is a very big chunk of it, but I think that in the broader scope of things, what we're seeing now is the realization of a lot of socio-technical problems that people have warned us about and done some type of preliminary work on, that type of thing. But we've now reached the point where the entire world is online to some extent, so those socio-technical problems are up front.

The issue now is that, again, we've reached that convergence point where it's affecting society at large. So unless human beings all of a sudden stop falling for BS, and propaganda, and this type of stuff, it's not going away.

Emily Boardman Ndulue

You've done work on toxic content, or hate speech, as you just said, bad behavior, as well as on mis- and disinformation. Can you help us understand what you see as the relationship between those categories conceptually?

Jeremy Blackburn

If you want to look into conspiracy theories, which are a lot of these socio-technical problems that we're seeing right now, that is, essentially, mis- and disinformation at its heart. And if you look at a lot of the online hate speech and online extremism, you'll see things like discredited studies, or old theories like phrenology, stuff like that, show up. So that, again, is mis- and disinformation, where people are taking information, research, whatever, and skewing it into their own point of view.

So a lot of the disinformation in particular, at least what we saw in the 2016 elections and more recently, was designed to sow discord more than anything, to cause problems and rile people up. So you had the Internet Research Agency that would try to do things like organize a right-wing rally and maybe a Black Lives Matter protest across the street. So that's how it ties in. The web is kind of a toxic place, and mis- and disinformation breed from that and take advantage of it, I guess.

Fernando Bermejo

One of the key themes in Jeremy’s work is that uncertainty – whether that’s in politics, public health, or anything else – opens the door for information to be manipulated. The people who understand how this content moves around the Internet and goes viral can be shockingly influential. This is true of the Russian IRA and it’s true of the online far-right movements that spawn popular conspiracy theories too.

Jeremy Blackburn

It may start on an ultra-extreme site, like 8chan. That's pretty far out, about as extreme as you can get, but then it'll pass through a couple of filters. Maybe it goes on The_Donald, they have a new site—Reddit's old The_Donald got banned—and they'll kind of filter out, maybe, some of the too-overt stuff. And then, maybe, it'll show up on Twitter, or mainstream Reddit, the rest of Reddit, this type of stuff.

QAnon is a perfect example. So QAnon started on 8chan, 4chan, let's say, as Pizzagate. And this was when the DNC was hacked and everything released on Wikileaks, and during the 2016 election. And these people, the users, the community went through it all and built this big conspiracy that the Democratic Party was molesting children and stuff like that. Then, it turned into QAnon and over time, it got more and more weird, and it graduated to bigger platforms. Well, Q, him or herself, themself, was posting on 8chan. You still saw it show up on, obviously, Twitter, or Parler, Gab, Reddit, everything. Facebook, everything.

So I think that's the canonical example of where this stuff is super weird, but then it gets filtered and iterated on by these fringe communities, and then at some point, it bursts. It makes its way out into the mainstream when it's become palatable enough, or whatever.

And I think it actually makes a lot of sense, because you do see these cave-like, if you will, maybe dumpster fires is the better way to put it, fringe, extreme communities that absolutely do produce content and it evolves. We've seen it with memes in particular, how you can see the really racist stuff, you can see how it changes over time to become more palatable as it moves through the network.

Fernando Bermejo

Identifying key figures in conspiracy theory communities like these is a crucial element of Jeremy’s work. It helps him understand where the content he’s studying comes from, and therefore who is likely to be a source of that disinformation.

Jeremy Blackburn

We've done some work in trying to detect these accounts, at least on Reddit. … When we're looking for ground truth, we just go back through their posting history, we look at the places that they've posted, and we have to qualitatively assess it – whether or not they were posting at the same time as other accounts that we either maybe do know, or are under suspicion, this type of thing.
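That "posting at the same time" cue can be turned into a first-pass filter before the qualitative review Jeremy describes: bucket each account's posts into short time windows and count how often pairs of accounts keep landing in the same bucket. A minimal sketch, with an illustrative window size and threshold:

```python
from collections import defaultdict
from itertools import combinations

def co_posting_pairs(posts, window_secs=60, min_shared=20):
    """Count account pairs that repeatedly post in the same time window.

    `posts` is an iterable of (account, epoch_seconds) tuples. A high
    count is only a lead for manual review, not proof of coordination.
    """
    buckets = defaultdict(set)  # time bucket -> accounts active in it
    for account, t in posts:
        buckets[int(t // window_secs)].add(account)

    shared = defaultdict(int)   # (account_a, account_b) -> shared buckets
    for accounts in buckets.values():
        for a, b in combinations(sorted(accounts), 2):
            shared[(a, b)] += 1
    return {pair: n for pair, n in shared.items() if n >= min_shared}
```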

If you're looking at a platform that has a declared social network, you can look at how information flows between people that are connected, and again, you can say, "Hey, this person is some kind of central disinformation figure, so if stuff passes through them, that can be an indicator of it being disinformation."

Fernando Bermejo

Once Jeremy has identified specific content spread by specific actors, he can start to understand how it jumps across platforms.

Jeremy Blackburn

Any time that we want to compare these different platforms, we have to, in some way, shape, or form, take into account the differences in the platforms and also latent connections. Because on Twitter, we have explicit connections. On Reddit, we can kind of come up with some connections, like if two people reply to each other's comments or post in the same subreddit. That can kind of give us a connection, but we're not really ever going to have much of a connection of, "Hey, this person saw this on Twitter and then went to Reddit to post about it." Again, we can say, "Oh, there are links to Reddit on Twitter," but it's still a latent relationship that's really hard to uncover.
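Those explicit cross-platform links are among the few traces researchers can actually observe, so extracting them is a standard first step. A small sketch of the idea, using a simplified regex over hypothetical post text; production link extraction needs more care with URL variants:

```python
import re

# Sketch: count explicit cross-platform edges, e.g. reddit.com links
# appearing in tweets. Most diffusion paths remain latent; these links
# are the rare directly observable ones.
REDDIT_LINK = re.compile(
    r"https?://(?:www\.|old\.)?reddit\.com/r/(\w+)\S*", re.IGNORECASE)

def reddit_links_in(posts):
    """Map subreddit name -> number of posts linking into it."""
    counts = {}
    for text in posts:
        for sub in REDDIT_LINK.findall(text):
            counts[sub] = counts.get(sub, 0) + 1
    return counts

print(reddit_links_in([
    "look at this https://www.reddit.com/r/conspiracy/comments/abc123",
]))  # -> {'conspiracy': 1}
```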

And I think that's really where we're lacking, is that we still don't know the path. We're not able to properly articulate or demonstrate the exact path that some of this stuff is moving. We can only take guesses and we don't know how many steps are being missed along the way. And that's why I think it's really important to look at stuff multi-platform, because until we start looking more, and more, and more multi-platform, we'll never be able to trace how stuff is moving. If all we do is stay on Twitter, then great. Maybe Twitter can be improved, but the rest of the web will be messed up.

Emily Boardman Ndulue

And Jeremy made sure to point out that mis- and disinformation are far from the only issue that arises when radicalized individuals and communities move between platforms.

Jeremy Blackburn

I think a good example of why we need to look at multi-platform stuff has to do with a lot of the content moderation that has been done. If you want to be real explicit about it: Gab, where the Tree of Life shooter came from; Parler, where the January 6th stuff was organized, or at least a lot of the people involved were on there. Those were, in large part, populated by people that were either banned from Twitter or were leaving Twitter because other people were banned, this type of thing. And if you were studying Twitter, maybe all that type of content disappeared, and in isolation, maybe it looks like Twitter got better. But obviously, we know that's not what happened. These people went to other platforms and became more radicalized.

Fernando Bermejo

Jeremy is issuing the important reminder here that mis- and disinformation are not just online problems. What happens on the Internet has a complex and sometimes very direct relationship with what’s happening offline.

Emily Boardman Ndulue

And as Kiran pointed out, understanding how misinformation works online could offer a path to understanding how to address those broader social problems. Today’s episode ends on a bleak note, but for our next and final episode, we’ll be talking to some researchers who are offering solutions to the problems we’ve outlined throughout this podcast. Thank you as always for joining us on Viral Networks, and we’re excited to have you back next time for our series finale.

Credits

Viral Networks is a production of Media Ecosystems Analysis Group. We’re your hosts, Emily Boardman Ndulue and Fernando Bermejo. All episodes are produced and edited by Mike Sugarman. Julia Hong joined us as a script writer and provided additional research. Music on this show was composed by Nil and our producer Mike. Funding to produce this series was provided by the Bill and Melinda Gates Foundation. And last but certainly not least, we want to give a big thank you to all of the experts who joined us for interviews on this show.