Josephine Lukito, PhD student at UW-Madison

It’s unclear just how much and how successfully Russia used social media to interfere in the 2016 U.S. presidential election.

But there’s one indisputable fact: Russian Twitter accounts posing as Americans successfully fooled a lot of actual Americans. According to recent studies, they fooled a number of reputable news organizations, too.

In November, an analysis by Recode found that organizations like The Washington Post and The Miami Herald had unknowingly included tweets from Russian accounts associated with the Kremlin-linked Internet Research Agency in their stories.

Expanding on that study, researchers at the University of Wisconsin-Madison examined 33 major news organizations and found that 32 of them had published at least one story featuring a tweet from a prominent IRA Twitter account, for a total of 116 stories. The tweets appeared in outlets like NPR, USA Today and the Los Angeles Times, but showed up more often in partisan outlets like the Huffington Post or The Daily Caller.

The study, “The Twitter Exploit: How Russian Propaganda Infiltrated U.S. News,” thankfully didn’t find “fake news” very often: only 5 percent of the tweets used by the media organizations contained false information.

Instead, the tweets were often used as examples of opinions on divisive social issues, like sanctuary cities. Partisan outlets used the Russian Twitter accounts posing as hyper-conservative and hyper-liberal individuals to draw disgust or give approval.

Josephine Lukito, a PhD student in the UW School of Journalism and Mass Communication, and Chris Wells, associate professor at SJMC, are the lead authors of the study. The Cap Times sat down with Lukito to break down what they found and discuss how journalists can prevent this from happening in the future.

What was surprising about the study results?

A lot of the news stories talking about Russian influence have really focused on the 2016 election, but what we found was that a lot of the stories that used these tweets actually appeared right after the election, early in 2017, and then kind of moving on into March 2017. This was kind of a reminder that Russian attempts to influence the American public don’t just happen during the election, they don't go away, and we have to stay vigilant.

These tweets were appearing in stories that were about politically charged issues, like sanctuary cities, race-relation stories and stories about people’s opinions of healthcare. So these were all very politically charged topics that exist throughout the year, and they don't just crop up during an election season.

But a second point that I think wasn’t as surprising, but was super important for us, was that these tweets were appearing as American opinions, and not necessarily as pieces of information. It’s not that they were generating fake news that was getting into news media, and that’s very comforting to me. But whenever a news organization wanted to take that vox populi, public opinion, or when they tried to do the man-on-the-street interview … they would use tweets and would say, “This is an example of an American being outraged or an American having this political opinion,” when in reality that “American” is a Russian account or a Russian person.

Your study talks about how these tweets in the news weren’t propagating “fake news” as much as amplifying controversy.

Conservative (outlets) were using conservative Russian tweets to support their opinion and then kind of using liberal/Russian tweets to say, “Look at how silly liberals are,” and vice-versa. Liberal news organizations were very much using conservative Russian tweets to say, “Look at how silly the conservatives are.”

I’ll highlight one other instance, about a Muslim woman who had been a witness of a terrorist attack. In a Washington Post story, they had found the (Russian) tweet (about the Muslim woman) and they removed it, and they replaced it with a tweet that shared a similar sentiment, a (white supremacist) Richard Spencer tweet.

I think it was a nice way of summarizing our concern. Because (the tweets) are not only not verified, but journalists specifically use them because they’re so emotionally charged, they convey these really strong partisan opinions. We’re basically looking for evidence of partisanship so we can report it, and it becomes really dangerous when we don’t know where those partisan opinions come from.

The study notes that those “string-of-tweet” reaction stories were especially susceptible to fake tweets. Is there a usefulness for that story format, or are “Twitter reaction” stories more trouble than they’re worth?

So we had seen a story where the (Russian) tweet had gotten embedded, and it was about a dog who was playing the piano, and how social media got so excited about this dog playing the piano. And I remember reading the article and being like, there’s no real journalistic value to it.

From the perspective of our study, we’re not saying that man-on-the-street interviews or tweets are inherently important or not important, but when you do use them, you have to ask yourself, what’s the newsworthy value?

We have these stories because people click on them ... but for journalists this is where the business versus the ethics come into play, right? And this is one area where I feel like we would have to really as a community push back and say, “It really doesn't matter if this is an incredibly click-baity, popular story if it serves no journalistic value, we should not consider posting it or publishing it.”

How did media organizations deal with Russian tweets in their stories once they were aware of the problem?

There have been a couple of different responses. The first one is no response whatsoever, and that seems to be the most common. The exceptions to this have been organizations like the Huffington Post, Washington Post and Vox, all of which, either after the Recode article or after our study, retroactively wrote editor’s notes or corrections saying, “There was a tweet that was found here that was from a Russian sockpuppet.” And then a couple of news organizations opted to delete the tweet after finding it.

There’s a question of, what do I do as a journalist to make sure this doesn't happen again? Whenever we do man-on-the-street interviews, we ask, right? We have to make sure: “Are you okay with me interviewing you and audio recording you?” but we don't have similar practices in place when we take somebody’s tweet online. This question of, “How much verification do you need to use a tweet?” is really important.

A few days ago a Slate author wrote a piece essentially explaining, “We had one of these tweets in a story and this is how it happened.” What’s it like for you to see these real journalists reckoning with their past decisions? 

I think that’s so important and I really admire that kind of work. One of the best consequences of this particular study for us was that several news organizations reached out to us and said, “We’re trying to find the story that included the tweet, can you give us more information? We do want to issue these corrections, we wanted to let you know that we’ve posted new corrections or editor’s notes based on your information.”

That is not only admirable, but it ensures credibility within the news organization. I think the worst thing that a news organization can do is ignore it.