Study: YouTube Achieves Some Success at Hobbling Conspiracy Theories

By John P. Mello Jr.

Mar 3, 2020 10:14 AM PT

YouTube’s efforts to reduce the spread of conspiracy theories on the platform appear to be bearing fruit.

“Our analysis corroborates that YouTube acted upon its policy and significantly reduced the overall volume of recommended conspiratorial content,” three researchers wrote in a study the University of California, Berkeley, released Monday.

Because information is spread on YouTube largely through recommendations — 70 percent of watched content is recommended content — the researchers spent 15 months studying 8 million recommendations from the platform’s next-watch algorithm.

Starting in April 2019 — three months after YouTube announced measures to limit content on the platform that could misinform users in harmful ways — the research team found a consistent decrease in conspiratorial recommendations.

The decline continued until the beginning of June 2019, when the raw frequency briefly hit a low point of 3 percent, noted researchers Marc Faddoul and Hany Farid of UC Berkeley, and Guillaume Chaslot of the Mozilla Foundation.

Raw frequency is the product of the number of times a video is recommended and the probability that the recommended video is conspiratorial, the researchers said.
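As a rough illustration of that definition, the short Python sketch below weights each recommendation by a classifier's probability that the video is conspiratorial and expresses the weighted count as a share of all recommendations in the sample; the Recommendation structure, field names and probabilities are hypothetical stand-ins, not the researchers' actual data or code, and the normalization to a percentage is an assumption consistent with the 3 percent low point cited above.

```python
# A minimal sketch, not the study's code: each recommendation is weighted by a
# (hypothetical) classifier probability that the recommended video is
# conspiratorial, and the weighted count is reported as a share of all
# recommendations in the sample.

from dataclasses import dataclass

@dataclass
class Recommendation:
    video_id: str            # video surfaced by the next-watch algorithm
    times_recommended: int   # how often it appeared in the sample
    p_conspiracy: float      # classifier's estimate that the video is conspiratorial

def raw_frequency(recs: list[Recommendation]) -> float:
    """Return the conspiratorial share of recommendations, in percent."""
    total = sum(r.times_recommended for r in recs)
    if total == 0:
        return 0.0
    weighted = sum(r.times_recommended * r.p_conspiracy for r in recs)
    return 100.0 * weighted / total

# Made-up numbers: 100 recommendation slots, of which roughly 3 are likely
# conspiratorial, giving a raw frequency of about 3 percent.
sample = [
    Recommendation("vid_a", times_recommended=90, p_conspiracy=0.01),
    Recommendation("vid_b", times_recommended=8, p_conspiracy=0.15),
    Recommendation("vid_c", times_recommended=2, p_conspiracy=0.45),
]
print(f"raw frequency: {raw_frequency(sample):.1f}%")
```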

It appears there has been some slippage, though. Since last year’s low point, conspiratorial recommendations have rebounded steadily, the study notes. They now are 40 percent less common than when YouTube first announced its measures.

Further, “despite this downtrend over the past year, the overall volume of conspiratorial content recommended from informational channels remains relatively high,” the researchers found.

Selective Targeting

“Tribute ought to be paid to YouTube for effectively filtering out some dangerous themes, such as claims that vaccines cause autism,” the researchers wrote.

“Nonetheless, other themes which we showed to be actively promoted by YouTube were described by the FBI as very likely to motivate some domestic extremists to commit criminal, sometimes violent activity,” they pointed out.

A closer look at the data reveals that YouTube is selective in how it applies its “harmful content” policies, research scientist Faddoul noted.

“Some political and religious conspiratorial content hasn’t been demoted, while other topics, like the coronavirus, haven’t had any form of conspiratorial disinformation being promoted,” he told TechNewsWorld.

Given the superior data, labeling and computational resources available to YouTube, it is technically capable of detecting conspiratorial topics with high accuracy, the study notes.

“In fact, for certain topics which seem to fall under particularly close scrutiny, recommended videos are effectively stripped from disinformation,” it states. “It is encouraging to see that YouTube is willing to target specific issues effectively and in a timely fashion. Deciding what to demote is therefore a question of policy more than technology.”

Maintaining Neutrality

YouTube’s selectivity stems from the desire of all social media companies to intervene in content as little as possible, Faddoul explained.

“They want to remain as neutral as possible,” he said. “They don’t want to be criticized as biased, as they often are by conservatives who see a liberal bias in Silicon Valley.”

There are legal reasons to maintain neutrality. Under federal law, YouTube and other social media outlets are treated as platforms rather than publishers. As such, they bear less legal liability for the content posted on their services than a publication that exercises editorial control.

“The more you start editing and picking what can be promoted and not, the closer you come to effectively becoming an editor,” Faddoul observed.

The study calls for greater transparency in how YouTube makes its recommendations for users.

“With two billion monthly active users on YouTube, the design of the recommendation algorithm has more impact on the flow of information than the editorial boards of traditional media,” the researchers wrote.

The role of the engine is even more crucial, they maintained, when one considers the following:

  • The increasing use of YouTube as a primary source of information, particularly among youths;
  • The nearly monopolistic position of YouTube on its market; and
  • The ever-growing weaponization of YouTube to spread disinformation and partisan content around the world.

“And yet the decisions made by the recommendation engine are largely unsupervised and opaque to the public,” they pointed out.

Algorithm End Run

Adding transparency to the recommendation process would add accountability to it, Faddoul contended.

“These decisions — what should be promoted by the recommendation engine and what should not — are important. Having a process that is public facing and involving other actors besides YouTube would improve the process,” he said.

“If you had more transparency, you’d have a more informed viewer,” remarked Cayce Myers, an assistant professor in the communications department at Virginia Tech in Blacksburg.

Removing the platform protections for social media under the Communications Decency Act of 1996 could prompt tougher content policing by social media companies, he told TechNewsWorld.

“A lot of the time, company policy is driven by liability,” Myers explained, “so if you change the liability for a third-party site like YouTube, you may see a more aggressive response to cleaning up content.”

Although transparency generally is a good thing, and it could help reduce the spread of harmful content, it poses some problems.

“Making your algorithms transparent will divulge technology that you don’t want your competitors to have access to,” said Vincent Raynauld, an assistant professor in the department of communication studies at Emerson College in Boston.

What’s more, “whenever these platforms adjust their algorithms, content producers are always finding a way to bypass the algorithms or reshape their content in ways to evade their filters,” he told TechNewsWorld.

Radicalization Tool

While YouTube’s reduction of conspiratorial recommendations is encouraging, the study noted that the change by itself cannot make the problem of radicalization go away.

YouTube videos reportedly have been used to groom terrorists and swing some conservatives from the right to the far right.

“In general, radicalization is a more complex problem than what an analysis of default recommendations can scope, for it involves the unique mindset and viewing patterns of a user interacting over time with an opaque multi-layer neural network tasked to pick personalized suggestions from a dynamic and virtually infinite pool of ideas,” the study observes.

“Being exposed to conspiracy content is one aspect of the radicalization process,” Raynauld said. “In order to be radicalized, you need access to this content to fuel radicalization, but I suspect there are other factors external to YouTube content at play.”


John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
