During the Black Lives Matter protests, tens of thousands tweeted under the hashtag #DCBlackout about a network failure that never happened. A lesson in the power of uncertainty.
Marcie Berry sounds alarmed. “Currently NO INFO comes from D.C.,” she tweets on June 1 at 3:26 a.m. about the Black Lives Matter protests in front of the White House. “No streams, no posts, pictures or videos. Everything stopped at the same time.” Berry sees this as the next step in the police’s escalation: “They start killing and try to hide it with jammers.” She tags the tweet with the hashtag #DCBlackout.
Berry’s Twitter profile picture shows a comic drawing of a young white woman with long, purple hair. Her account has a little over 200 followers. She is one of the first to report alleged internet disruptions in the U.S. capital. But then the hashtag #DCBlackout spreads extremely quickly on Twitter – and with it the tale that U.S. authorities have paralyzed the Internet in the capital. Presumably to move against the protesters who have been besieging the White House for days and who drove Donald Trump to hide in a bunker.
When Alex Engler wakes up in Washington, D.C., a few hours later and looks at his Twitter timeline, he is amazed. He sees a flood of tweets using the hashtag #DCBlackout to report that nothing digital is coming out of his hometown anymore. But Engler’s Internet works flawlessly. It quickly becomes clear to him that this is a disinformation campaign. Engler has been researching the topic for years, first in Chicago, now at the Brookings Institution think tank in Washington. He takes a closer look at who is spreading the news of an internet blackout in the U.S. capital and notices that many of the accounts spreading the rumour were created only recently. Most have never tweeted before, or only very little.
“These are clearly fake accounts,” Engler tweets at 8:14 a.m. At this point, however, he no longer stands a chance against the supposed news of an internet shutdown: 35,000 accounts have already sent more than half a million tweets with the hashtag #DCBlackout – and pushed it into Twitter’s trending topics.
Bot or not?
Elsewhere on Twitter, debates quickly flared up as to whether the rumour of an internet blackout in D.C. was spread and amplified by so-called bots. These are social media profiles behind which there is not a human but a machine. In contrast to bots that openly retweet specific hashtags or do funny things, social bots conceal that they are operated by machines.
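How little machinery such a bot needs can be shown in a few lines of code. The sketch below is a minimal, purely hypothetical example of a hashtag-amplification bot; it assumes the Python library tweepy and Twitter API access, the hashtag and all credentials are placeholders, and running anything like this against a real hashtag would violate Twitter’s platform rules.

```python
# Hypothetical sketch of a simple hashtag-amplification bot.
# Assumes the tweepy library; all credentials are placeholders.
import time

import tweepy

client = tweepy.Client(
    bearer_token="...",        # a real operator would control hundreds
    consumer_key="...",        # of such accounts, each with its own keys
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

already_boosted = set()

while True:
    # Fetch the newest tweets carrying the target hashtag ...
    response = client.search_recent_tweets(query="#SomeHashtag", max_results=10)
    for tweet in response.data or []:
        if tweet.id not in already_boosted:
            client.retweet(tweet.id)  # ... and mechanically boost each one
            already_boosted.add(tweet.id)
    time.sleep(60)  # repeat every minute, around the clock
```

Nothing here hides the automation; accounts like these do not even pretend to be human. Their effect comes purely from volume.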
Since the 2016 U.S. election campaign and the Brexit vote in the U.K., there has been concern about the impact such machine-fed accounts can have on social media discussions when they join debates, spread hashtags, and amplify tweets. Scientists and I.T. experts argue bitterly about the role of these robot accounts: some are convinced that they can massively distort discussions on Twitter – and thus undermine democratic processes – while others maintain that social bots spreading systematic misinformation do not exist, or at least do not play a central role.
Because it’s complicated. Take Marcie Berry, the Twitter user who launched the story of the Internet disappearing overnight in D.C.: Is her account run by a social bot? By a real person? Or by something in between? What is spread by humans and what by machines, what is shared with honest intentions and what is meant to troll and steer democratic decision-making and emotionally charged movements, is often confusing and difficult to distinguish. Even bot researchers and their classification tools often fail to classify accounts correctly (SSRN: Rauchfleisch, A. et al., 2020). Twitter and Facebook are now making some effort to ban inauthentic accounts from their platforms. In practice, however, this too proves difficult.
From the hashtag #DCBlackout, you can learn a lot about how rumours are spread and amplified in social networks these days: about the role social bots play, about professional trolls, about gullible users who pass on nonsense unchecked, and about the sophisticated orchestration of disinformation campaigns.
As soon as researcher Engler expressed his suspicion of fake accounts on June 1, he was promptly accused of having fallen for the government’s strategy himself. “I saw no evidence that there was a blackout,” Engler argues. The answer: “The fact that there is no evidence is exactly the point.” For six hours, it was claimed, there had been no tweets from protesters in D.C.
Such distrust is hardly surprising. Evidence of brutal police violence in the U.S. spreads through social media almost daily. Black Lives Matter activists have long believed the state and its authorities capable of such methods – the perfect breeding ground for a rumour that plays to the anger of people who are already upset. “These are people who understandably have no trust in authorities at all,” says Engler. “People who distrust experts are more susceptible to misinformation.”
Just before Berry and others started tweeting about the #DCBlackout that night, dramatic pictures from Washington appeared on Twitter. They showed police officers beating protesters and government buildings on fire – photos that could be read as another piece of the puzzle, fitting the narrative of a police force that blocks the Internet in order to escalate. That the burning government buildings were screenshots from the Netflix series Designated Survivor only emerged much later, when the suspicions had already been sown.
Indications of coordinated disinformation
Engler is convinced that someone took advantage of the already heated atmosphere surrounding the protests in Washington. People as yet unknown had programmed bots: automated accounts that retweeted tweets carrying the hashtag #DCBlackout.
This is supported not only by Engler’s claim that he saw some obvious bot accounts in action: Twitter also became active in the early morning hours of June 1 and automatically identified and deleted many accounts as bots. And the platform’s detection systems are usually correct, at least when it comes to purely automated accounts, says communication scientist Darren Linvill, who has tested Twitter’s bot detection systems.
“Real bots are technically easy to identify,” he says. Of course, you have to use more features than many of the poorly performing bot-detection programs do, which often treat tweet density as the decisive factor. But there are quite stable features – which he does not want to name, so as not to tell the attackers how they could better camouflage themselves.
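To illustrate the point, here is a hypothetical toy scorer: the first function uses tweet density alone, as the weak detectors do; the second adds two publicly discussed signals (account age and follower/following imbalance). These are not Linvill’s unnamed features, and all weights and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float  # "tweet density", the naive detectors' favourite signal
    age_days: int          # how long ago the account was created
    followers: int
    following: int

def density_only_score(acc: Account) -> float:
    """Naive heuristic: flags heavy tweeters, misses low-volume bots."""
    return min(1.0, acc.tweets_per_day / 100)

def combined_score(acc: Account) -> float:
    """Adds two publicly discussed signals; all weights are invented."""
    density = min(1.0, acc.tweets_per_day / 100)
    newness = 1.0 if acc.age_days < 30 else 0.0        # freshly created?
    imbalance = acc.following / max(acc.followers, 1)  # follows many, followed by few
    return 0.4 * density + 0.3 * newness + 0.3 * min(1.0, imbalance / 10)

# A freshly created, low-volume account sails past the density check:
sleeper = Account(tweets_per_day=5, age_days=3, followers=2, following=900)
print(density_only_score(sleeper))  # 0.05 – looks harmless
print(combined_score(sleeper))      # 0.62 – clearly suspicious
```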
Linvill examined an extensive data set of accounts suspended from Twitter in 2018 – three million tweets from 3,800 accounts which, among other things, were accused of being part of a Russian disinformation campaign. His result: Twitter had correctly rated the profiles as social bots. “Only a dozen were real people. An extremely low error rate,” says Linvill.
Darius Kazemi of the Mozilla Foundation, who is usually sceptical when it comes to attributing activity in social media to bots, is also sure in this case: there is a coordinated disinformation campaign behind #DCBlackout.
Most of the social bots used were, according to Engler, not particularly sophisticated: they served primarily to disseminate content around the hashtag #DCBlackout. But that is the first step of the manipulation. “Because if you as a user see that such a tweet has many interactions, you tend to trust the content,” says Engler. Twitter’s algorithms also latch onto high-reach hashtags – and popularize the topic further – unless, as happened with #DCBlackout, the platform actively intervenes.
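The feedback loop Engler describes – visible interactions breeding trust and further pickup – can be made concrete in a toy model. Every number below (audience size, pickup probabilities, the weight of social proof) is invented for illustration; the only point is how a modest bot-planted seed compounds where purely organic activity stays flat.

```python
import random

def simulate(seed_bot_tweets: int, rounds: int = 12, audience: int = 100_000,
             base_rate: float = 0.00002, proof_weight: float = 0.000001) -> int:
    """Toy cascade: the more tweets a hashtag already has, the more
    likely each user is to pick it up. All parameters are invented."""
    total = seed_bot_tweets
    for _ in range(rounds):
        # Pickup probability grows with the hashtag's current visibility.
        p = min(1.0, base_rate + proof_weight * total)
        total += sum(random.random() < p for _ in range(audience))
    return total

random.seed(42)
print("organic only:  ", simulate(seed_bot_tweets=0))    # a few dozen tweets
print("500 bot tweets:", simulate(seed_bot_tweets=500))  # well over a thousand
```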
Technically, it is straightforward to program such social bots, says Engler. He describes the #DCBlackout campaign as “rather culturally sophisticated” – because the rumours were cleverly seeded and then just as cleverly deconstructed.
Not much else is known about the campaign – and perhaps it never will be. For example, whether the attackers launched the rumour themselves or merely hijacked a misinterpretation from the activist scene. But Linvill and Engler see evidence that its course was deliberately planned.
The fact that Twitter had already deleted numerous accounts early in the morning of June 1 further fuelled the discussion, because it strengthened some users’ theory that critical voices were being suppressed. And any attempt to make clear that it was a hoax seemed just as suspicious.
Another phenomenon that Engler observed points in a similar direction. On the morning of June 1, a user sent him a screenshot of numerous other accounts declaring the blackout to be misinformation. “Can I have your opinion on this, please?” she asks him. The accounts all write that they live in the region themselves and know people who commute to D.C. They all use the same wording. “This hashtag looks like misinformation,” they tweet. “Stop unsettling people.” Always the same story, always the same text. For Engler, there is no doubt that this, too, is an organized campaign – but this time one to correct the previous disinformation. But what sense would that make?
A question that is quickly answered by looking at the effect. By the time Engler first saw these corrective tweets, many activists had already formed their opinion. Their reading: now “the powerful” were using social bots to suppress tweets about the blackout. The confusion over what is real and what is not was complete.
If one assumes that a single campaign was behind the entire hashtag, then an effect was exploited here that researchers have been discussing for some time: the liar’s dividend. It means that publicly spreading lies is not only about sowing doubt about facts, but also about undermining the credibility of authentic sources. Around #DCBlackout, those who exposed the rumour were additionally discredited as untrustworthy – since apparent bot accounts were spreading their arguments. In other words: the actual trolls behind the hashtag helped to debunk their own story – and in doing so only made it more credible.
“A brilliant Russian move.”
“Creating this level of doubt is a brilliant Russian move,” says disinformation researcher Linvill. However, he means this more generally – he is careful about specifically attributing the inauthentic accounts around the #DCBlackout hashtag to anyone. The hashtag may have been amplified on behalf of Russia, but other forces, too, have an interest in destabilizing the situation in the United States. “So it could also be a copy of a Russian strategy that has been successful in the past.” A strategy aimed primarily at spreading insecurity and sowing mistrust, thereby weakening trust in democracy. Linvill observed this repeatedly before the 2016 U.S. election, when Russia’s Internet Research Agency paid numerous people to spread disinformation about U.S. politics on Twitter.
However, Linvill currently sees a shift in the attackers’ strategy – away from pure social bots towards what he calls “cyborg accounts”: accounts that are initially controlled by machines and later by people. Behind them could be, for example, workers in troll factories in Russia or elsewhere who distribute messages and hashtags partly manually, partly under machine control.
But these accounts, too, says Linvill, should not be overestimated – they intervene only in a very targeted but efficient way, reinforcing topics that were often already there. “Conspiracy theories mainly spread organically. There are people with strange views, and when they are amplified by the Twitter algorithm, it looks inauthentic but can be real. That makes our work a lot more difficult.”
If you look at the Twitter history of Marcie Berry – the woman who spread the #DCBlackout rumour so early – you conclude that the account is more likely run by a person than by a machine. She appears to be an angry protester, shocked by police violence, who writes a lot about the protests on Twitter. But perhaps she is really a young man somewhere, stirring up trouble in the United States on behalf of a state or an organization, who developed Marcie Berry as one of many so-called personas: a coherent personality on Twitter that looks as authentic as possible and can hardly be exposed as a fake.
Here, too, the liar’s dividend comes into play. In the end, only someone who knows Marcie Berry personally can clarify with certainty whether there is an authentic person behind her account or not. According to Engler, this is also part of the attackers’ plan: “They want to spread uncertainty. If we can no longer trust each other, they will have achieved their goal.”
According to Engler and Linvill, hacked accounts were also used to amplify the spread of #DCBlackout: profiles that real users had run for years but lost access to through identity theft. “There is a black market in hacked accounts for such cases,” says Engler. Darius Kazemi, too, investigated this by contacting the owners of these accounts.
Kazemi, however, is confident that the social bots involved were used only for amplification, not to actively formulate content: training an artificial intelligence to do that is hard. Engler agrees: “Machine language production is not yet good enough to produce credible content. A few sentences, yes, but an entire misinformation campaign?”
But above all, it is not necessary: to create uncertainty and weaken trust in democracy, neither sophisticated artificial intelligence nor the dreaded deepfake videos are required. A few paid trolls and many simple social bot accounts that do not even pretend to be human are enough – accounts that merely retweet existing rumours. Above all, though, what is needed are people among whom disinformation falls on fertile ground and finds resonance. That should worry us more.