#ElectionWatch: Bots in Virginia?

Assessing Twitter traffic ahead of a gubernatorial vote

Bots — automated Twitter accounts masquerading as human users — were busy ahead of Virginia’s gubernatorial election on November 7, boosting both Democrat Ralph Northam and Republican Ed Gillespie.

Bots are an ever-present reality on Twitter; in political debates, the main question is how much they distort the debate. To check for distortion, @DFRLab scanned tweets mentioning the candidates’ Twitter handles (@EdWGillespie and @RalphNortham), Gillespie’s campaign handle (@EdForVirginia, promoted on his Twitter profile), the two men’s surnames, and the hashtag #VAGov.

The scans showed that bots were present, but did not appear to be active in sufficient quantities to fundamentally distort the debate. We observed an imbalance between the two candidates, with Northam and his supporters apparently gaining more traction than their rivals despite a relatively equal number of followers, but this does not appear to be primarily due to bot traffic.

Posts and popularity

In terms of Twitter followers, Gillespie and Northam appear relatively evenly matched. Northam had 47,000 followers as of November 6; Gillespie had 43,000. Gillespie’s team, @EdForVirginia, had just under 5,000.

Twitter profiles for Ed Gillespie (left) and Ralph Northam (right). (Source: Twitter)

Using a machine scan, we collected the most recent 100,000 tweets to mention the two candidates’ handles, and the most recent 100,000 tweets to mention their names or #VAGov. A separate scan of @EdForVirginia returned 5,300 tweets for the seven days to November 6. To allow for a closer comparison, we did not collect data on individual divisive “wedge” issues, which tend by their nature to attract one-sided traffic, but focused on messages naming the candidates themselves.

Despite the two candidates’ relatively even followings, Northam’s Twitter handle was mentioned roughly three times more often than Gillespie’s.

Left, the accounts mentioned most often in the scan of Twitter handles. Right, the accounts mentioned most often in the scan of surnames.

Posts from, or supporting, Northam were far more popular — measured by the number of retweets — than posts supporting Gillespie. In the scan of the candidates’ handles, nine of the ten most-retweeted posts supported Northam; the sole exception was a tweet from Republican Senator Ted Cruz.

The ten most-retweeted posts from the machine scan of mentions of the two candidates’ handles; note Cruz’s tweet at the bottom.
Mentions of the candidates’ names, without the Twitter handle.

Gillespie’s name was mentioned more often than Northam’s, but many of those mentions were hostile: the ten most-retweeted posts in the scan of the candidates’ names accounted for over 16,000 mentions of him, all negative.

The ten most-retweeted posts from the scan of the candidates’ surnames; note how often Gillespie is mentioned negatively.

Bots: present but not dominant

Analysis of the traffic suggests that it was largely organic, but with some bot presence. In the mentions of the two candidates’ handles, the 100,000 tweets were generated by 38,854 users, for an average of 2.6 tweets per user. In mentions of the candidates’ names, the average was 2.0 tweets per user.

@DFRLab has typically observed ratios of 1.5–2.0 tweets per user in organic traffic; the presence of large numbers of bots can push the figure above 10 tweets per user. By that measure, the mentions of the candidates’ names appear largely organic, while the mentions of their handles suggest a moderate bot influence.
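As a rough illustration, the tweets-per-user heuristic can be sketched in a few lines of Python. The function names, input format, and exact banding thresholds here are our illustrative assumptions, not part of @DFRLab’s published methodology.

```python
from collections import Counter

def tweets_per_user(tweets):
    """Average number of tweets per unique author.

    tweets -- list of (tweet_id, author_handle) pairs from a scan.
    """
    authors = Counter(author for _, author in tweets)
    return len(tweets) / len(authors)

def traffic_assessment(ratio):
    """Map a ratio onto the rough bands described in the text."""
    if ratio <= 2.0:
        return "largely organic"        # ~1.5-2.0 typical of organic traffic
    if ratio < 10.0:
        return "moderate bot influence"  # e.g. the 2.6 seen in the handle scan
    return "heavy bot presence"          # >10 suggests large numbers of bots

# Toy sample: 6 tweets from 3 distinct users -> ratio of 2.0
sample = [(1, "a"), (2, "a"), (3, "b"), (4, "c"), (5, "a"), (6, "b")]
print(tweets_per_user(sample), traffic_assessment(tweets_per_user(sample)))
```

On this banding, the 2.6 tweets per user seen in the handle scan falls into the “moderate bot influence” range, while the 2.0 seen in the name scan sits at the organic boundary.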

A closer analysis of the traffic shows that apparent bots were used to promote both sides. Given the anonymous nature of fake accounts, it is not possible to say who was operating them.

Some accounts posted very high numbers of retweets. For example, @AnneReeder6, an account created in January which has no avatar picture, posted 176 mentions of Northam between November 3 and November 5. Every one was a retweet. As of November 6, every one of its last 100 posts was also a retweet. This is botlike behavior.

Profile for @AnneReeder6; note the lack of avatar or background image, or any personal features. (Source: Twitter)

On the other side, the account @Cheryl_P12, which supports Gillespie, also showed botlike characteristics. Created on January 6, it had posted 140,000 tweets and likes by November 6, an average of 459 engagements per day. As of November 6, 99 of its last 100 posts were retweets, suggesting an account that is largely automated but with some user interaction.
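The account-level signals used above — lifetime engagements per day, and the share of retweets among an account’s recent posts — can be sketched as follows. The field names and the flagging thresholds are illustrative assumptions for demonstration, not a definitive bot-detection rule.

```python
from datetime import date

def botlike_signals(created, observed, total_engagements, recent_is_retweet):
    """Compute the heuristic signals for one account.

    created / observed    -- datetime.date objects
    total_engagements     -- lifetime tweets + likes
    recent_is_retweet     -- e.g. last 100 posts, True where a retweet
    """
    days_active = max((observed - created).days, 1)
    per_day = total_engagements / days_active
    retweet_share = sum(recent_is_retweet) / len(recent_is_retweet)
    return {
        "engagements_per_day": per_day,
        "retweet_share": retweet_share,
        # Thresholds are illustrative; tuned to flag cases like
        # @Cheryl_P12 (~459 engagements/day, 99/100 retweets).
        "botlike": per_day > 100 and retweet_share >= 0.9,
    }

sig = botlike_signals(date(2017, 1, 6), date(2017, 11, 6),
                      140_000, [True] * 99 + [False])
```

Applied to @Cheryl_P12’s figures, the sketch reproduces the roughly 460 engagements per day and 99% retweet share described above.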

Individual posts appear to have benefited from significant bot boosting. A tweet by Northam supporter Jason Kander, posted at 23:46 UTC on November 4, enjoyed a sudden spike in retweets over two hours later, at 01:59 UTC on November 5. The surge was sudden enough to cause a needle-like spike in the timeline of posts, characteristic of artificial amplification.

Timeline of posts overnight from November 4–5, showing the sharp spike just before 02:00 UTC.
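A needle-like spike of this kind can be found by bucketing tweet timestamps per minute and comparing each bucket against the background rate. This is a hedged sketch, not @DFRLab’s actual tooling; the five-times-median threshold is chosen purely for illustration.

```python
from collections import Counter
from datetime import datetime

def find_spikes(timestamps, factor=5.0):
    """Return the minutes whose tweet volume exceeds factor x the median.

    timestamps -- list of datetime objects, one per tweet.
    """
    # Bucket each timestamp into its minute
    per_minute = Counter(ts.replace(second=0, microsecond=0)
                         for ts in timestamps)
    counts = sorted(per_minute.values())
    median = counts[len(counts) // 2]
    return [minute for minute, n in per_minute.items()
            if n > factor * max(median, 1)]

# Simulated background of one tweet per minute, plus a 50-retweet burst
# packed into the minute of 01:59 UTC on November 5
base = [datetime(2017, 11, 5, 1, m, 30) for m in range(50)]
burst = [datetime(2017, 11, 5, 1, 59, s % 60) for s in range(50)]
print(find_spikes(base + burst))  # flags the 01:59 minute
```

Against a steady background, a burst of retweets concentrated in one or two seconds stands out sharply under even this crude test.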

The tweets came very rapidly, with multiple accounts retweeting the post in the same second.

Layout of tweets posted in the two seconds from 01:59:09 to 01:59:11 UTC. This search is not filtered for posts from Kander; it is a raw result, indicating how retweets of his post flooded the traffic.

Some of the accounts involved in this spike showed botlike characteristics, posting almost exclusively retweets. Some also retweeted content from the same accounts, and even identical posts, not just the Kander post.

Retweets of a tweet by @eclecticbrotha, posted by four of the accounts which also retweeted Kander at 01:59.

The combination of simultaneous retweeting of one post and retweeting of identical content from various other users indicates either a botnet or an unusually coordinated group of users trying to amplify pro-Northam messages.

A post accusing Northam of “serious ethical and legal issues”, posted by Gillespie supporter Rick Canton on November 2, also appeared bot-boosted.

It was retweeted, for example, by @AliasHere, an account created in December 2010, which had posted 269,000 tweets and 202,000 likes by November 6, at an average rate of 184 engagements a day.

Profile page of @AliasHere. (Source: Twitter)

It was also retweeted by @Stumpy1258 (screen name “Tom Pronk”), a faceless account which, by November 6, had only posted 966 tweets, but 49,100 likes, an improbable imbalance for a human user. All of its 100 most recent posts were retweets, indicating a probable bot.

Profile page for @stumpy1258. (Source: Twitter)

The vast majority of bots revealed by this study were political, posting exclusively political content attacking or supporting one of the candidates. A few, however, were non-political, and seemed to be engaged by chance. @Wonce_In_A, for example, retweeted posts referring to the election as a “once in a lifetime” poll 69 times in succession:

Retweets by @Wonce_In_A, from machine scan.

Created in July 2017, the account had posted 11,000 times by November 6, a botlike rate. However, every recent tweet used the phrase “once in a”, suggesting this is a non-political bot set to share that phrase.

Tweets from @Wonce_In_A; note the repeated phrase. (Source: Twitter)

The inclusion of such bots in overall traffic is entirely to be expected. Bots are, after all, automated accounts, which cannot judge the relevance of the content they share. If they tweet high volumes in a short space of time, they can distort the overall traffic; as such, they can be political in effect, without necessarily being political in cause.

Conclusion

Bots have definitely been present in the final days of the Virginia race, amplifying posts on both sides. These are mainly political bots, not commercial ones (which typically post a wide range of content in a wide range of languages); they indicate that supporters on both sides have been looking for a covert edge in communications.

In our investigation of search terms specific to the candidates, bot impact appears limited. The ratio of tweets per user, one of the classic signs of bot presence, remains low, indicating some distortion but not wholesale manipulation. Sudden spikes in the timeline of posts, another indicator of bot presence, have also been limited, the most notable exception being the spike at 01:59 UTC on November 5.

Overall, therefore, the flow of traffic appears to have been largely organic. Bots are present, as they so often are on Twitter, but they do not seem to have triggered a wholesale distortion of the debate.


Ben Nimmo is Senior Fellow for Information Defense at the Atlantic Council’s Digital Forensic Research Lab (@DFRLab).

Follow along for more in-depth analysis from our #DigitalSherlocks.