YouTube's war on content aimed at 'manipulating, harassing or harming' children

"Our goal is to stay one step ahead."

YouTube announced plans this week to hire many more human moderators to flag disturbing and extremist content, acknowledging that software alone won't solve many of the problems plaguing the industry.

YouTube, which is owned by Google, said it would significantly increase the number of people monitoring such content across the company next year. By 2018, Google will employ roughly 10,000 content moderators and other professionals tasked with addressing violations of its content policies. The search giant would not disclose the number of employees currently performing such jobs, but a person familiar with the company's operations said the hiring represents a 25 percent increase from where Google is now.

The move follows a series of reports last month, including one from BuzzFeed, that surfaced YouTube videos showing children in disturbing and potentially exploitative situations, including being duct-taped to walls, mock-abducted and forced into washing machines.

Google said it has removed 150,000 violent extremist videos since June. The company has also removed hundreds of thousands more videos with content exploiting or endangering children, said the person familiar with the company's thinking. Some disturbing content appeared on YouTube Kids, the company's app marketed toward children.

"I've seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm," YouTube chief executive Susan Wojcicki said in a blog post. "Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube."

Google's decision to police publishers more aggressively comes at a moment when Silicon Valley companies are wrestling with how to get a handle on unwanted content across a host of areas, from violent videos appearing on Facebook Live, to hate speech, terrorism, and Russian disinformation campaigns attempting to distort political debate.

Last month, amid Capitol Hill hearings on Russian meddling, Facebook said it would hire an additional 10,000 security professionals to tackle political disinformation and other security threats. Earlier this year, the company said it would hire 3,000 more content moderators on top of the 4,500 it already had.

Google says that these content flaggers, many of whom are low-level contractors and some of whom are subject matter experts, will work to train computer algorithms to identify and thwart unwanted content. Google's software now flags 98 percent of the violent extremist videos that are removed, up from 76 percent in August, the company said in its blog post. The company's advances in data mining software, known as machine learning, now enable Google to take down nearly 70 percent of such content within eight hours of upload.

Google and its Silicon Valley counterparts have said they hope to train software to do most of the policing work. But the hiring spree this year demonstrates that a lot of undesirable content is slipping through the cracks and that more humans are necessary, said Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University.

"Companies are recognising that they are going to have to dig into their pocketbooks and pay more people if they are going to get their arms around this problem," Barrett said. He stressed that software would be a major part of any solution. "Given the volume of material on these sites, it's almost impossible to have the response be entirely human," he added.

Technology platforms are not legally required to police most content posted by third parties on their services, thanks to a two-decade-old law, Section 230 of the Communications Decency Act, which grants intermediaries immunity from liability. Child pornography, however, has been treated as an exception to Section 230 and is an area that the companies aggressively police, particularly as they make forays into new services aimed at children.

"Child abuse is one topic where they feel very comfortable acting aggressively to police their terrain in a focused way," Barrett said. "My sense is that they're going to have to get just as focused on Russia as they are on child abuse. If they don't, we will see a repeat in 2018 of what took place in 2016."

Washington Post