YouTube contractors explain how the company's 'inadequate' guidelines lead to promotion of disturbing children's videos
A disturbing "spoof" video using characters from the Doc McStuffins cartoon. "Drink Sex Muffin" was in the title. Image via screengrab.

Public outrage at revelations of inappropriate and disturbing children's content on YouTube led the company to hire thousands more human moderators. But some former and current "raters" say the company's moderation guidelines may be making the problem worse.


BuzzFeed News interviewed 10 former and current contractors who work as raters for YouTube and obtained documents and screenshots detailing the company's moderation protocols. Its Thursday evening report casts doubt on how effective the additional human moderation may be.

"These documents and interviews reveal a confusing and sometimes contradictory set of guidelines, according to raters, that asks them to promote 'high quality' videos based largely on production values, even when the content is disturbing," BuzzFeed's Davey Alba wrote. "This not only allows thousands of potentially exploitative kids’ videos to remain online, but could also be algorithmically amplifying their reach."

"In the past 10 days or so, [raters] were assigned over a hundred tasks asking them to make granular assessments about whether YouTube videos aimed at children were safe," Alba continued.

"Yesterday alone, I did 50-plus tasks on YouTube regarding kids — about seven hours’ worth," one rater said, explaining the long hours they've been asked to work in the weeks since the disturbing children's videos first came to national attention.

Though the company told BuzzFeed in a statement that raters "do not determine where content on YouTube is ranked in search results, whether content violates our community guidelines and is removed, age-restricted, or made ineligible for ads," a Cornell artificial intelligence professor told the outlet that raters' work does affect the algorithms.

"Since the raters [make assessments about quality], they effectively change the ‘algorithmic reach’ of the videos," Professor Bart Selman said.

Because users rarely look past the first few results on the first page of a search, a rater could effectively "block" videos by giving them low ratings, the AI expert continued. And because many tasks ask raters to judge the quality of videos that will in turn be used "to provide a video that users will want to watch," as per YouTube's rater guidelines, Selman said "raters will have a significant impact on what users will get to see."

In one screenshot, guidelines instruct raters to give a four-year-old video from a channel named "Weebl's Stuff" the highest quality rating because the author put ample "effort and care" into creating its animation and music. The video description in the screenshot says the Weebl's Stuff video features "a moaning ‘ahhh’ noise set to jarring music and disturbing images."

"Though YouTube said search raters don’t determine whether content violates its community guidelines, just this week," Alba wrote, "it assigned a task — a screenshot of which was provided to BuzzFeed News — asking raters to decide whether YouTube videos are suitable for nine- to 12-year-olds to watch on YouTube Kids unsupervised."

"A video is OK if most parents of kids in the 9-12 age group would be comfortable exposing their children to it; otherwise it is NOT OK," the screenshot of the instructions read. "The raters are also instructed to classify why a video would be considered 'not OK': if the video contained sexuality, violence, strong crude language, drugs, or imitation (that is, encouraging bad behavior, like dangerous pranks)."

The contractor who provided BuzzFeed the screenshot said that in their five years working for YouTube, they had never seen that sort of task before the exploitative content made headlines.

Two other raters told BuzzFeed there is no "official" way to report anything but child pornography while they work "if the task doesn’t explicitly call for it." One rater "encountered a disturbing video and flagged it as unsafe," but could not flag the disturbing host channel as a rater and had to report it as a "regular user."

“I did so, and got a blanket ‘Thanks, we’ll look into it’ from YouTube,” the rater explained. “I don’t know if the channel was taken down, but YouTube is like a hydra: You cut off one upsetting channel, and five more have sprung up by breakfast.”

You can read the entire report on YouTube's bizarre and contradictory rating guidelines via BuzzFeed News.