Memes on Pinterest gamify polarization in Canadian elections
The algorithm that determines what Pinterest users see next appears to lead them down rabbit holes of increasingly hostile political memes after they click on a particularly partisan image. This is not dissimilar to discovery algorithms on other content platforms, which are designed to show a user more of the types of content with which that user engages most. For the platform, this gamification captures more of the user’s time; for the user, it appeals to base instincts.
Using memes focused on Canadian Prime Minister Justin Trudeau, open-source analysis of how the platform’s algorithm operates revealed how Pinterest may contribute to the dissemination of extremist memes as recommended popular items on its app. The DFRLab monitored Pinterest for the 15 days ahead of the Canadian election (held on October 21) through October 25, allowing for the study of electoral mis- and disinformation on the platform both pre- and post-election. The results of the study showed that, after a user clicked on only a single hyperpartisan and often hostile meme, the platform would recommend other politically intense memes.
According to Stat Counter, a web traffic analysis company, as of October 2019, Pinterest accounted for 22 percent of Canadian social media traffic across all popular social media platforms, second only to Facebook, which accounted for a commanding 53 percent of traffic from Canadian social media users. As the election season ramped up, Pinterest, as with other platforms, became a repository for meme-based political smear campaigns. When algorithms operating at scale recommend politically charged memes to users who have viewed similar content, they risk exacerbating existing social divides and promoting further political polarization.
The mechanics of Pinterest
While it is the second most used social media platform in Canada, Pinterest is not a frequent topic of conversation when it comes to mis- and disinformation and, as such, merits a basic description of how it is designed to work.
Pinterest is composed of “pins,” the platform’s basic units, akin to posts on Facebook or tweets on Twitter. A pin can be an image or a link to a video. The home page, or user’s timeline, shows pins recommended by Pinterest based on the user’s search history and other factors. This page is comparable to a Facebook user’s timeline.
Users usually look for pins relevant to their search and can click on one to view it. After a user clicks a pin, Pinterest recommends personalized pins based on their popularity on the platform and their relatedness to the pin first clicked, leading the user to a “More ideas personalized for you” page.
This personalized section is generated by Pinterest’s algorithm to better recommend products or content. Users can also compile pins into a group called a “board.” Boards serve a similar purpose to a Facebook page by aggregating topically similar pins on a single page.
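The structure described above can be sketched as a minimal data model. This is illustrative only: Pinterest’s actual schema is not public, and every name below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Pin:
    """A single pin: an image or video link, the platform's basic unit."""
    pin_id: str
    media_url: str
    description: str = ""

@dataclass
class Board:
    """A board aggregates topically similar pins, like a Facebook page."""
    name: str
    pins: list = field(default_factory=list)

    def save(self, pin: Pin) -> None:
        """Saving ('pinning') adds a pin to this board."""
        self.pins.append(pin)

# A user's home feed is then a ranked list of recommended pins drawn
# from boards and pins related to what the user has clicked or searched.
board = Board(name="Canadian politics")
board.save(Pin(pin_id="p1", media_url="https://example.com/meme.jpg"))
print(len(board.pins))  # → 1
```

The key design point for the analysis that follows is that boards create explicit topical links between pins, which a recommender can exploit.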
From anti-Trudeau memes into a far-right ecosystem
To conduct a neutral analysis of the algorithm, the DFRLab used incognito mode in a Firefox browser with no accounts logged in and thus no search history. Conducting searches without logging in (using the Pinterest-guest browser extension) ensured that previous searches would not affect the recommendations. The incognito browser also prevented the storage of cookies, which may otherwise be used to track a user’s search history.
The DFRLab searched a series of specific relevant keywords on Pinterest’s search engine. A keyword search of “Trudeau” on the home page returned a few memes regarding the prime minister without requiring a lot of sifting through other pins on the home page.
Two of the memes were explicitly anti-Trudeau, which the DFRLab used as a starting point for its search. If a user clicks on a series of political memes, without engaging in any other type of interaction, such as liking, sharing, saving, or commenting, what sort of content would Pinterest’s content algorithm recommend?
If a user clicked on an anti-Trudeau meme pin, the algorithm recommended a host of additional content hostile toward Trudeau.
The problem is not regionally limited, however, as the platform pushed not only anti-Trudeau memes but also memes extending far beyond Canadian politics. For instance, after the DFRLab explored two anti-Trudeau memes, a couple of the recommendations were anti-Hillary Clinton memes, indicating a broader far-right meme ecosystem.
It also appears that the algorithm is not based solely on object recognition, as the recommendations after a single click also appeared to glean sentiment from the images. For instance, a relatively banal photo of Trudeau paired with anti-Trudeau text nevertheless introduced the user into a universe of right-wing memes. From a single click on an anti-Trudeau meme, the user was also exposed to anti-Hillary Clinton, anti-#MeToo, and anti-Alexandria Ocasio-Cortez memes, among others. The topics within this cluster of memes ranged from climate change to immigration to feminism.
Clicking through this recommendation pathway does not require significant interaction with the images themselves; it operates on a minimum-interaction, maximum-recommendation basis. If a user clicks on a single pin, topically related pins are introduced into the user’s timeline. The DFRLab used an account with no search history for Justin Trudeau to determine whether these memes would surface on a user’s home feed. Indeed, anti-Trudeau memes appeared on the user’s timeline.
In 2018, Pinterest rolled out a new algorithm named “Pixie.” According to Pinterest’s research, the algorithm improved on its previous recommendation framework and operates across Pinterest’s multiple sub-platforms, including Homefeed and Related Pins. It is unclear, however, whether the content recommendations observed in this study were delivered by Pixie.
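Pinterest’s published research describes Pixie as a random-walk recommender over the graph connecting pins and boards: starting from a query pin, it repeatedly hops pin → board → pin and counts how often each pin is visited. A toy sketch of that general idea follows; the graph, pin names, and walk length are invented for illustration and do not reflect Pinterest’s production system.

```python
import random
from collections import Counter

# Toy bipartite graph: each pin belongs to boards; each board contains pins.
# Here, an anti-Trudeau meme shares a board with another hostile meme.
pin_to_boards = {
    "anti_trudeau_meme": ["cdnpoli_memes"],
    "anti_clinton_meme": ["cdnpoli_memes", "us_politics"],
    "recipe_pin": ["cooking"],
}
board_to_pins = {
    "cdnpoli_memes": ["anti_trudeau_meme", "anti_clinton_meme"],
    "us_politics": ["anti_clinton_meme"],
    "cooking": ["recipe_pin"],
}

def pixie_style_walk(query_pin: str, steps: int = 1000, seed: int = 0) -> Counter:
    """Count pin visits along random pin -> board -> pin hops from query_pin."""
    rng = random.Random(seed)
    visits = Counter()
    pin = query_pin
    for _ in range(steps):
        board = rng.choice(pin_to_boards[pin])
        pin = rng.choice(board_to_pins[board])
        visits[pin] += 1
    return visits

visits = pixie_style_walk("anti_trudeau_meme")
# Pins co-located on the same boards dominate; unrelated pins (the recipe)
# are never reached from this starting point.
print(visits.most_common(2))
```

The clustering effect the DFRLab observed is consistent with this style of graph traversal: a walk started from a partisan meme stays inside the subgraph of boards that host partisan memes.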
Other factors could also be fueling these recommendations, including ghost profiles that inflate the overall number of anti-Trudeau boards. For instance, a couple of the accounts posting anti-Trudeau memes had posted only a single pin or hosted a board dedicated to similarly hostile content, thus acting exclusively as amplifiers of anti-Trudeau messaging on the platform.
The DFRLab further confirmed this analysis by having an employee who had never been on the platform before create a new account using Opera, a browser that had similarly never been used before. After choosing an initial five (notionally unrelated-to-politics) topics, as is required by the platform, a single click on an anti-Trudeau meme did not immediately affect the make-up of the account’s home page. After logging out, however, the home page was riddled with anti-Trudeau memes upon logging in again.
While the DFRLab found a far-right ecosystem that results from a single click on an anti-Trudeau meme, the content recommendations are not limited to far-right content. A simple search for “Trump” from a blank, search-free Pinterest profile yielded both far-right and far-left pins.
When the user clicked on a single far-left (e.g., anti-Trump) pin, as with the Trudeau content, his or her home page was similarly overtaken by far-left content.
Such ecosystems, known as “filter bubbles,” reinforce the strength of a user’s opinions by limiting content to that which conforms to his or her preexisting beliefs and values.
The DFRLab shared this research with Guillaume Chaslot, founder of AlgoTransparency and an advisor at the Center for Humane Technology, who stated:
Most recommendation engines are designed to increase user “engagement.” The easiest way for algorithms to achieve that is often to promote very similar content to what the user clicked on. This creates a “filter bubble,” where users only see one type of content.
In Pinterest’s case, the DFRLab did find that the platform creates a political echo chamber of memes that may lock a user into a single viewpoint, to which Chaslot further added, “it makes it difficult for users to have a balanced view of reality.”
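The feedback loop Chaslot describes can be demonstrated with a toy engagement-maximizing recommender. The inventory, topic labels, and scoring rule below are invented for illustration; the point is only that a single click is enough to tilt a ranked feed toward one viewpoint.

```python
from collections import Counter

# Hypothetical inventory: each item is tagged with a single topic.
inventory = {
    "anti_trudeau_meme": "far_right",
    "anti_clinton_meme": "far_right",
    "craft_pin": "crafts",
    "recipe_pin": "cooking",
}

def build_feed(topic_weights: Counter, size: int = 4) -> list:
    """Rank items by the user's accumulated interest in their topic."""
    return sorted(inventory, key=lambda item: -topic_weights[inventory[item]])[:size]

weights = Counter()                            # no history: no topic preferred
weights[inventory["anti_trudeau_meme"]] += 1   # a single click on one meme...

feed = build_feed(weights)
# ...and topically related items now lead the feed.
print(feed[:2])  # both "far_right" items rank first
```

Because each click raises the weight of the clicked topic, and higher-weighted topics are shown (and therefore clicked) more, the loop is self-reinforcing: the filter bubble tightens with every interaction.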
As a visual medium, memes can act as particularly viral projections of political opinions. For that reason, they are often a particularly salient choice to propagate conspiracy theories and disinformation. When content recommendation systems cluster memes based on their political qualities, they risk compounding those harms.
Kanishk Karan is a Research Associate with the Digital Forensic Research Lab (@DFRLab).
John Gray is a Visiting Research Fellow with @DFRLab.
Follow along on Twitter for more in-depth analysis from our #DigitalSherlocks.