How YouTube failed the 2020 election test

Today let’s talk about a comprehensive new report on election integrity, and the particularly low marks it gave to one platform.

I.

The 283-page report, which was published today, is called “The Long Fuse: Misinformation and the 2020 Election.” It is the final work of a coalition of some of the most respected names in platform analysis in academia and the nonprofit world: the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, Graphika, and the Atlantic Council’s Digital Forensic Research Lab.

The report builds on work that the partnership did leading up to and after November to identify and counter false narratives about the 2020 US presidential election. It describes its goals this way:

The EIP’s primary goals were to: (1) identify mis- and disinformation before it went viral and during viral outbreaks, (2) share clear and accurate counter-messaging, and (3) document the specific misinformation actors, transmission pathways, narrative evolutions, and information infrastructures that enabled these narratives to propagate.

The hope was that by better understanding how misinformation spreads on social networks, the partnership could push platforms to develop better policy and enforcement tools to reduce the impact of bad actors in the future.

Reading through the report, there’s a lot to be impressed by. Foreign interference, which all but defined the 2016 US presidential election, played almost no perceptible role in 2020. After making huge investments in safety and security, platforms really did get better at identifying fake accounts and state-backed influence campaigns, and generally removed them before they could do much damage.

The flip side of this, of course, is that 2020 gave US platforms an arguably even more difficult problem to confront: the virulent spread of election-related misinformation from domestic sources, most prominently President Trump, his two adult sons, and a potent ecosystem of right-wing publishers and influencers. Perhaps the report’s most crucial finding, however obvious, is that misinformation in 2020 was an asymmetric phenomenon. The lies were spread primarily by right-wing actors hoping to overturn the result of an election that, despite all their viral posts to the contrary, saw no widespread fraud.

The report makes clear that the platforms did not cause these lies to be spread. Nor does it seek to make a case that these lies spread primarily through algorithmic amplification. Rather, it places platforms at the center of a dynamic information ecosystem. Sometimes the lies were “top down” — fabricated by Trump and his cronies and then turned into content by partisan media outlets and right-wing influencers. Other times, the lies were “bottom up”: shared by an average citizen as a tweet, a Facebook post, or a YouTube video, which was then spotted by Trumpworld and amplified.

These processes worked to reinforce each other, creating powerful new narratives that ultimately fueled the rise of previously obscure outlets like One America News Network and Newsmax. And in all of that, there is plenty for every platform studied here to answer for.

The report faults platforms for failing to anticipate and “pre-bunk” likely election misinformation; failing to examine the efficacy of their efforts to label misinformation or share those findings with external researchers; and often failing to hold high-profile users accountable for repeated violations of platform policies, among other issues.

Still, in both the report and a 90-minute virtual event that the partnership held Wednesday, I was struck by the unique — and, to my mind, under-discussed — role that YouTube played in the election.

So let’s discuss it.

II.

The day after the election, I wrote here about how YouTube was being exploited by the right wing. Unclear policies, inconsistently applied, combined with opaque or misleading labels had made YouTube a playground for hyper-partisan outlets. Uniquely among platforms, YouTube’s partner program enabled many of these corrosive videos to earn money for their channels — and for YouTube — through advertising.

The EIP report picks up on all these themes and more, fleshing them out with new data and explaining the special role YouTube played in cross-platform misinformation campaigns.

Here are three key observations from the report.

One, for misinformation narratives tracked by the project using Twitter’s API, YouTube was linked to more often than any other platform. For tweets containing links to misinformation, YouTube ranked third among all domains, behind Gateway Pundit and Breitbart. Researchers tracked 21 separate incidents, generating nearly 270,000 retweets, that pointed to YouTube. The next-highest-ranking platform, at 17th, was Periscope; Facebook does not appear on the list.

This finding speaks to the way YouTube serves as a powerful library for hoaxes and conspiracy content, which can continuously be resurfaced on Twitter, Facebook, and other platforms via what the report calls “repeat spreaders” like Trump and his sons.

“It was kind of a place for misinformation to hide and be remobilized later,” said Kate Starbird, an associate professor and co-founder of UW’s Center for an Informed Public, in response to my question during Wednesday’s event. “From our view, it was a core piece of the repeat spreading phenomenon, and a huge piece of the cross-platform disinformation spread.”

YouTube disputes this conclusion and says its rank on this chart is more of a reflection of the site’s popularity in general than a comment on the accuracy of the information found there. Other sites, including The Washington Post, ranked high on the list because they contained information debunking false claims rather than advancing them. “In fact, the most-viewed election-related content channels are from news channels like NBC and CBS,” YouTube spokesman Farshad Shadloo told me.

Two, YouTube’s library of misinformation was enabled by policies that tended to be more permissive than similar ones from Facebook and Twitter. An analysis of platform policies leading up to the election found that in August 2020, YouTube failed to adopt comprehensive policies related to misinformation about how to vote, incitements to voter fraud, or efforts to delegitimize election results. By the end of October, the only significant change YouTube made was to adopt a comprehensive policy about voting procedures, researchers said.

Meanwhile, Facebook, Twitter, and TikTok all implemented comprehensive policies designed to thwart efforts to delegitimize the election. (In fairness to YouTube, the report’s policy analysis still ranked it above Nextdoor and Snapchat, which were found not to have adopted comprehensive policies in any of these areas.)

“YouTube lagged in terms of their implementation,” said Carly Miller, a research analyst at Stanford. “Things were able to propagate on the platform because of that.”

YouTube disagrees with this conclusion as well and sent me a long list of policy changes it had made over the past year, including some that were copied by its peers. “As we’ve publicly discussed, we don’t agree with EIP’s framing of our policies or our efforts,” Shadloo told me. “Our community guidelines are generally on par with other companies and we launched several products in 2018 and 2019 to raise authoritative content and reduce borderline videos on our site.”

Three, the report found that every platform struggled to moderate live video in particular. Some videos containing lies about the election attracted millions of views before they received so much as a label.

“All platforms struggle with labeling,” said Nicole Buckley, a research analyst at UW. “But in particular YouTube had issues with adapting to embedding labels in new forms … of content sharing.”

Ultimately, the EIP reached very different conclusions about YouTube’s performance in the 2020 election than YouTube itself did.

“This is a cross-platform, cross-media set of issues where each part of the ecosystem is leveraged in a different way,” Shadloo said, echoing a conclusion drawn by the EIP researchers. “No two platforms face the exact same challenges, and … interventions that make sense for one may not for another.”

On that point, YouTube and the EIP agree. But for the most part, I have the same concerns about the platform that I had in November.


This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.
