YouTube says removing videos isn't enough to combat COVID-19 misinformation

The company's chief product officer wrote that deleting content as quickly as possible doesn't fix the problem.
By Emily Olsen

Removing videos that spread misinformation isn’t enough to contain the problem, wrote YouTube’s chief product officer in a blog post.

“For COVID, we rely on expert consensus from health organizations like the CDC and WHO to track the science as it develops. In most other cases, misinformation is less clear-cut. By nature, it evolves constantly and often lacks a primary source to tell us exactly who’s right,” wrote Neal Mohan.

“In the absence of certainty, should tech companies decide when and where to set boundaries in the murky territory of misinformation? My strong conviction is no.”

Mohan said the video giant’s strategy is to increase the amount of “good” content while removing videos that violate YouTube’s policies, which focus on content that can “directly lead to egregious real-world harm.”

He said YouTube has removed more than a million videos that spread dangerous COVID-19 misinformation, like fake cures or claims that the pandemic is a hoax.

But Mohan argued that being too aggressive with content removal would have a chilling effect on speech.

“Removals are a blunt instrument, and if used too widely, can send a message that controversial ideas are unacceptable. We’re seeing disturbing new momentum around governments ordering the takedown of content for political purposes,” he wrote. 

“And I personally believe we’re better off as a society when we can have an open debate. One person’s misinfo is often another person’s deeply held belief, including perspectives that are provocative, potentially offensive, or even in some cases, include information that may not pass a fact checker’s scrutiny.”

WHY IT MATTERS

YouTube, like other social media companies, has faced plenty of criticism for allowing misinformation to circulate on its platform and for directing users to content that spreads false claims.

A study published in BMJ Global Health found that, as of March 2020, more than a quarter of the most viewed YouTube videos about COVID-19 contained misinformation.

Another study, published in the Journal of Medical Internet Research in January, found that YouTube had boosted search rankings for pro-vaccine content to counter anti-vaccine videos, but that when users arrived at anti-vaccine content from another site, YouTube's recommendation algorithm served them more anti-vaccine videos.

In 2020, the Mozilla Foundation launched a browser extension that allowed volunteers to report YouTube videos that they “regret watching—like pseudoscience or anti-LGBTQ+ content.” 

According to a report by Mozilla published in July, the recommendation algorithm was a major source of regrettable content. More than 70% of the videos flagged by volunteers were accessed through YouTube’s automatic recommendation system. 

THE LARGER TREND

YouTube has taken some steps to promote verifiable health information. As COVID-19 vaccines rolled out to the public, the video giant teamed up with public health experts and celebrities to provide factual information about the shots.

In January, YouTube announced a team that would bring more high-quality medical information to its platform, led by Dr. Garth Graham, former U.S. deputy assistant secretary of health.

"For a garden to grow, you remove the weeds and you plant the seeds," he told MobiHealthNews when the team was launched. 

"The removal of misinformation, which is evidenced by YouTube’s vaccine policies, that is part of the weed removal. The way we look at this is, once you remove the weeds and there’s a vacuum of information, how do you plug in that information so people are able to get what they need?"
