By Issie Lapowsky and Steven Levy | Business | Apr 24, 2018, 5:00 AM
Facebook’s newly public, 27-page community standards document reveals the hard work of balancing the removal of toxic content against free speech.
IF YOU EAT someone, do not share it on Facebook. Cannibalism videos are banned.
Same with still images of cannibalism victims, alive or dead. Unless the image is presented in a medical context with a warning that only those 18 and over can see it. But fetish content regarding cannibalism? Verboten for all ages. And not just on News Feed; it’s also a no-no on other Facebook properties like Instagram—and even Messenger.
Today, Facebook is making public virtually the entire Community Standards playbook that moderators use to determine whether comments, messages, or images posted by its 2.2 billion users violate its policy. These moves are part of Facebook’s ongoing Trust-A-Palooza effort to be more open in the face of unprecedented criticism. In doing so, the company is laying bare just how much ugliness its global content moderators deal with every day, and just how hard it is to always get it right.
“I’ve been wanting to do this for a long time,” says Monika Bickert, Facebook’s Head of Global Policy Management, of the release. Before joining the company she was a federal prosecutor and the US legal advisor for the Bangkok embassy. (She also had the dubious pleasure of defending the company before Congress last January.) “I have actually had conversations where I talked about our standards and people said, ‘I didn’t actually realize you guys have policies.’”
Coming soon after CEO Mark Zuckerberg’s testimony in two lengthy congressional hearings earlier this month, the release is well-timed if not overdue. Zuckerberg repeatedly took responsibility for what is posted on Facebook, and he apologized when legislators brought up cases where his moderators (now an army of 7,500) failed to take down clearly inappropriate content, from opioid sales to hate speech in Myanmar that potentially fueled a genocide. Yet he never backed down from his belief that the company was on a path toward minimizing the problem by adding more moderators and ultimately automating the process with AI algorithms that would catch objectionable content much more quickly than moderators, who generally view content only after users flag it as inappropriate.
In the interrogatory heat, he did not get a chance to lay out the nuanced approach the company takes to enforce its self-defined community standards while also encouraging free expression, including speech that some users might find objectionable. Facebook’s dilemma is that it wants to be a safe place for users without becoming a strict censor of their speech. In the document released today it explains, “We err on the side of allowing content, even when some find it objectionable, unless removing that content prevents a specific harm.”
If the devil is in the details, this 27-page guide is the devil’s work indeed. The policy represents, says Bickert, almost all of the working document given to moderators. (The exceptions involve information in areas like terrorism that might make it easier for bad actors to evade notice.) And it is a living document—its release today will undoubtedly lead to specific criticisms that the company will consider in its ongoing revision of the policy.
The document doesn’t come with an explicit content warning, but it easily could. The nearly 8,000-word bulleted list of no-nos describes with often explicit levels of granularity how Facebook defines more than 20 different offenses, from harassment and graphic violence to false news and fake accounts. Consider the bullet point prohibiting videos showing the “tossing, rotating, or shaking of an infant too young to stand by their wrists, ankles, arms, legs, or neck.” The standards call for Facebook to provide resources to people who post “images where more than one cut of self mutilation is present on a body part and the primary subject of the image is one or more unhealed cuts.”
Pieces of the policy have been leaked previously; in May 2017, The Guardian published hefty portions of an earlier training deck. But making the official policy public is a significant step. Facebook understands it is opening itself up to mockery over seemingly absurd distinctions like its definition of “mass murderer.” (Killing four people qualifies you; at two or three, you are a “serial murderer,” and your admirers have a bit more leeway in defending you.) But without such nitpicky details, how would its moderators be able to enforce policy? As Zuckerberg put it before Congress, “We have policies to try to make it as not subjective as possible.”
The standards are at their most exacting when it comes to hate speech, an attempt to walk the fine line between banning genuinely toxic content and allowing people to vent frustrations in general. It’s okay to use hate speech when discussing hate speech, though. And you can quote hate speech to make fun of it. The difficulty of making such determinations is why the company is struggling to automate the process, though Bickert is optimistic that algorithms will soon be able to nail hate speech effectively.
In addition to pulling back the curtain on content moderation, Facebook will also make it possible for people to appeal bans on individual posts; previously they’d been able to do so only for entire Pages. Starting with takedowns involving nudity or sexual activity, hate speech, and graphic violence, Facebook is promising a speedy clarification and a possible reconsideration, ideally within 24 hours. EFF legal director Corynne McSherry says a faster and more transparent appeals process is not just welcome but necessary. “Despite the effort we’re still going to see a lot of improper takedowns,” she says. “Once you are in the business of private censorship, it’s hard to draw those distinctions.”
FACEBOOK’S COMMUNITY STANDARDS have been attracting criticism and scrutiny for years. In 2015, women expressed outrage about their breastfeeding photos being banned, and the drag queen community rose up to protest Facebook’s “real names” policy. In response, Facebook shared a previously unseen level of detail about how it polices content, answering the breastfeeding controversy, for example, with lines like, “We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.”
At the time, these rules were more extensive and detailed than anything Facebook had previously shared. But three years and two confused congressional hearings later, they look awfully vague and subject to misinterpretation, especially given current concerns over both censorship and undesirable content on Facebook.
Last week, in a run-up to the release, the company allowed several reporters to sit in on a biweekly meeting of its Content Standards Forum, the group responsible for setting and revising the content policy. Its members are Facebook employees, many of whom have backgrounds in public policy, the law, and domain expertise in sensitive areas such as sex crimes. We weren’t allowed to share the actual conversations in the room, because they may have touched on policy issues that are in flux. But we were told this was a typical session, involving issues such as hate speech, terrorism, and sex offenders. Led by Bickert and Mary DeBree, who heads the content policy team, about 20 Facebookers in a Menlo Park conference room were joined via video by teams in DC, Dublin, and other locations, including a lone participant in a New York City conference room named “Death and Co.”
The meeting covers two categories of business: “heads up” items are ones where a policy adjustment may be in order, and “recommendations” involve the results from working groups tackling specific issues, often in consultation with outside experts and interest groups.
In keeping with Facebook’s data-driven ethos, the matters under discussion are backed by considerable research into the nuances of the issues, the results of which are presented on huge screens on the side of the room. One recommendation, for example, dealt with a particularly controversial issue. The group considering it proposed four options, one of them being to leave the policy as is. After a spirited discussion, a consensus emerged that adopting the option that seemed ideal in principle would be too easily misunderstood in practice. Bickert made the ultimate ruling not to change policy. When asked whether Facebook was comfortable rendering the final verdict on such matters, Bickert answered yes, though she clarified that policy was always fluid and that she welcomed outside feedback.
Also clear from the roundtable: Facebook regards all its properties, not just its main app, as subject to its policy enforcement. “The policies apply regardless of whether it’s shared with one person or many. And we do actually see reports come to us through Messenger,” Bickert says.
From a legal perspective, Facebook is under no obligation to write or enforce any of these policies. It is protected from the consequences of its users’ speech by a provision of the 1996 Communications Decency Act that treats social media platforms as a “safe harbor” for speech. That provision, Section 230, distinguishes Facebook from a publisher that stands behind its content. Yet Facebook knows it must go beyond the legal minimum to keep itself from descending into a snake pit of harassment, bullying, sexual content, and gun-running. There is also an increasing clamor to do away with Section 230 now that the internet startups the provision was intended to help are giants.
Ultimately, Facebook must answer to its users, and transparency is a step towards that. The true test of this experiment in openness will be how responsive Facebook is to the feedback it gets, and how closely its aspirations for content moderation match up to reality. Moreover, regulating what content is permitted is only one side of Facebook’s content challenge; similar transparency on the algorithms that determine the rankings in your News Feed can be just as important.
It is easy to mock the Jesuitical specificity of Facebook’s guidelines. Yet balancing community standards with free speech for 2.2 billion people is a monumental task—one we don’t yet know can be done at all. Bickert and Facebook believe it can. And now we can see for ourselves how they’re doing it.