YouTube is shedding new light on how it moderates its sprawling video platform, which gets billions of views every day.
On Tuesday, the company released for the first time a statistic called the "violative view rate," a new data point that YouTube plans to include in its Community Guidelines enforcement reports. Put simply: for every 10,000 views on its platform — at least in the last quarter of 2020 — about 16 to 18 of those views were of videos that violate YouTube's rules, which currently prohibit everything from hate speech to medical misinformation about Covid-19 to spam.
In a blog post published Tuesday, YouTube presented these statistics as a sign of progress, sharing that its violative view rate has fallen 70 percent since 2017 thanks to improvements the company has made to its content moderation-focused artificial intelligence. "We've made a tremendous amount of progress, and it's a very, very low number," said Jennifer Flannery O'Connor, YouTube's director of product management for trust and safety, "but of course we want it to be lower, and that's what my team works day in and day out to try to do."
YouTube shared this new information as politicians and users have grown increasingly concerned about how tech companies moderate their platforms, amid an "infodemic" of Covid-19 misinformation, and following the Capitol insurrection and a presidential election cycle last year fueled by conspiracy theories.
At the same time, YouTube's stats on violative content reinforce a narrative some YouTube executives have promoted in the past: that its systems generally catch bad content, and that the problem of nefarious videos on its site is generally relatively small.

YouTube also said on Tuesday that it removes 94 percent of rule-breaking content with automated flagging systems, and that the vast majority of those videos are caught before they get 10 views. In total, YouTube says it has removed more than 83 million videos since it began publishing enforcement transparency reports three years ago.
"We have a big denominator, which means we have a lot of content," CEO Susan Wojcicki told Recode in 2019. "When you look at it, all the news and concerns and stories have been about this fractional one percent."
But the numbers YouTube released on Tuesday have limitations. Here's how the company calculated them: YouTube takes a sample of views — that is, instances of a user watching a particular video (YouTube did not release the number of views included in the sample) — and sends the videos behind those sampled views to its content reviewers. The reviewers study the videos and determine which ones violate the company's rules, allowing YouTube to produce an estimated rate of views of "violative videos."
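The arithmetic behind that kind of sampled estimate can be sketched as follows. This is an illustrative reconstruction, not YouTube's actual pipeline: the population size, true rate, and sample size below are all made up for the example, and the 0/1 labels stand in for human reviewers' verdicts.

```python
import random

random.seed(0)

# Hypothetical stand-in for a platform's view log: 1 means the viewed
# video would be judged violative by reviewers, 0 means it would not.
# The true rate here is set to 0.17% (i.e., 17 violative views per 10,000),
# roughly matching the range YouTube reported.
TRUE_RATE = 0.0017
all_views = [1 if random.random() < TRUE_RATE else 0 for _ in range(1_000_000)]

def violative_view_rate(views, sample_size):
    """Estimate the share of views that land on rule-breaking videos
    by 'reviewing' only a random sample of views, not all of them."""
    sample = random.sample(views, sample_size)
    return sum(sample) / sample_size

estimate = violative_view_rate(all_views, 100_000)
print(f"Estimated violative views per 10,000: {estimate * 10_000:.1f}")
```

The point of the sketch is that the published figure is a statistical estimate whose accuracy depends on the (undisclosed) sample size and on reviewers' judgment calls about what counts as a violation.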
Keep in mind that YouTube's own reviewers, not independent auditors, decide what constitutes a violation of YouTube's guidelines. While Facebook committed last year to an independent audit of its content moderation metrics, Flannery O'Connor said on Monday that the video platform had yet to make a similar commitment.
YouTube has often been slow to decide which types of controversial content to ban. The platform only updated its hate speech policies to ban neo-Nazi and Holocaust denial content in 2019. And while researchers had warned for years about the spread of the right-wing conspiracy theory QAnon, YouTube only banned "content targeting an individual or group with conspiracy theories that have been used to justify real-world violence" in October of last year.
There's also a lot of content that YouTube won't take down because it doesn't violate the company's rules, but that straddles the line — content some critics feel shouldn't be allowed on the platform at all. YouTube sometimes calls these kinds of controversial videos "borderline content." It's hard to research how common this borderline content is, given the sheer size of YouTube. But we know it's there: the company has tracked videos with false election information, for instance.
A prominent example of YouTube not outright removing offensive and harmful content came in 2019, when the company faced outrage after deciding to leave up content from conservative YouTuber Steven Crowder that included racist and homophobic harassment of then-Vox journalist Carlos Maza (under intense pressure, YouTube eventually stripped Crowder's ability to run ads). Later that year, Wojcicki told creators that "[p]roblematic content represents a fraction of one percent of the content on YouTube" but has a "massive impact."
YouTube does remove ads from creators who post content that conflicts with the platform's monetization rules, and it downranks borderline content, but it doesn't release comparable statistics on how common this type of content is or how many views it typically gets.
As for why YouTube is releasing this particular statistic now, Flannery O'Connor said the company had been using the metric internally for several years to track YouTube's progress on safety and spikes in views of violative videos, and to set goals for its machine learning team. "We felt [it's] best to just be transparent and use the same metrics internally and externally," she said.
YouTube's announcement is part of a broader pattern of social media companies insisting that their platforms are not, in fact, dominated by nefarious content — while critics, researchers, and journalists continue to point out the sheer number of views and clicks such content often draws. Even when YouTube removes these videos, they have at times already succeeded in spreading harmful ideas beyond the platform. The Plandemic video, for example, which pushed false Covid-19 conspiracy theories last year, racked up millions of views on the platform before it was removed.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.