MARCH 2019 ISSUE
The company blew it on privacy and fake news. Can it do better against trolls and racists? An exclusive embed with Facebook’s shadow government.
PHOTOGRAPHY BY BALAZS GARDI
FEBRUARY 26, 2019
BELLY OF THE BEAST
Men are scum.
You can’t say that on Facebook.
Men are pigs. You can’t say that either.
Men are trash. Men are garbage fires.
Also banned.
It’s nine A.M. on an autumn Tuesday, and I’m sitting in on a meeting about “men are scum” at Facebook’s campus in Menlo Park, California. The trouble started a year ago, in the fall of 2017. The #MeToo movement had recently begun.
Nicole Silverberg, currently a writer on Full Frontal with Samantha Bee, had shared on her Facebook page a trove of bilious comments directed at her after she’d written a list of ways men “need to do better.” In the comments section beneath her post, another comic, Marcia Belsky, wrote, “Men are scum.” Facebook kicked Belsky off the platform for 30 days.
This seemed absurd. First of all, the news at the time was dominated by stories of men acting scummily. Second, Facebook had been leaving up plenty of well-documented crap. By then the company was deep into an extended cycle of bad press over the toxic, fraudulent, and/or covertly Russian content that had polluted its platform throughout the 2016 presidential campaign. In Western Europe, the far right was using the site to vilify Middle Eastern migrants. In Southeast Asia, authoritarian regimes were using it to dial up anger against religious minorities and political enemies. But Facebook was taking down “men are scum”?
Belsky hopped on a 500-person (Facebook) group of female comics. A bunch of them were getting banned for similar infractions, even as they reported sexist invective being hurled their way. They decided to protest by spamming the platform. On November 24, dozens of “men are scum” posts went up. And then … they came right down. Belsky was put back in Facebook jail. The women of Men Are Scum became a brief Internet cause célèbre, another data point in the never-ending narrative that Facebook doesn’t care about you.
Ten months later, the issue hasn’t gone away, and Facebook C.E.O. Mark Zuckerberg has let it be known within the company that he, too, is concerned by the policy. Today’s meeting is part of an ongoing attempt to solve the problem. The session takes place in Building 23, which, compared with the glorious Frank Gehry-designed offices on the other side of campus, is small and relatively nondescript. No majestic redwood trees, no High Line-inspired rooftop park. Even its signage—inspirational photo illustrations of Elie Wiesel and Malala—suggests a more innocent era, when the company’s ethos seemed more daftly utopian than sinister. The meeting room is called “Oh, Semantics.” All the rooms at Facebook have cute names like “Atticus Finch” or “Marble Rye,” after the Seinfeld episode. Mostly they seem random, but this one feels apt, because Oh, Semantics is where the company’s shadow government has been meeting, every two weeks, to debate what you can and cannot post on Facebook.
No company’s image and reputation are as strongly linked to its C.E.O. as Facebook’s. (Who in the general population can name the chief executive of Google? Of Walmart?) This is partly because the C.E.O. invented the company. It’s partly because his image as smart-ass founder was immortalized in a successful Hollywood movie. And it’s partly because he’s young and awkward and seems to make mistakes all the time. As a result, people tend to judge Facebook through the prism of Mark Zuckerberg’s gaffes and missteps. Call it the not-so-great-man theory of history.
But when it comes to figuring out how Facebook actually works—how it decides what content is allowed, and what isn’t—the most important person in the company isn’t Mark Zuckerberg. It’s Monika Bickert, a former federal prosecutor and Harvard Law School graduate. At 42, Bickert is currently one of only a handful of people, along with her counterparts at Google, with real power to dictate free-speech norms for the entire world. In Oh, Semantics, she sits at the head of a long table, joined by several dozen deputies in their 30s and 40s. Among them are engineers, lawyers, and P.R. people. But mostly they are policymakers, the people who write Facebook’s laws. Like Bickert, a number are veterans of the public sector, Obama-administration refugees eager to maintain some semblance of the pragmatism that has lost favor in Washington.
Tall and thin, with long strawberry-blond hair, Bickert sits behind a laptop decorated with a TEENAGE MUTANT NINJA TURTLES sticker. She speaks neither in guarded corporatese nor in the faux-altruistic argot particular to Silicon Valley. As a relative newcomer to the tech industry, she regards Facebook with the ambivalence of a normal person, telling me she’s relieved her two teenage daughters are “not all about sharing everything” on social media. When I cite Facebook’s stated mission to make the world a “more open and connected place,” she literally rolls her eyes. “It’s a company. It’s a business,” she says. “Like, I am not, I guess, apologetic about the reality that we have to answer to advertisers or the public or regulators.” To her, Facebook is neither utopian nor dystopian. It’s just massively influential and, for the moment, not going anywhere.
In the wake of the 2016 election, Zuckerberg embarked on a ludicrous nationwide listening tour. While he was feeding calves in Wisconsin, Bickert and her team began methodically re-writing many of the policies that had fueled the world’s anti-Facebook animus. Last fall, Facebook agreed to show me exactly how they did it. I was granted unrestricted access to closed-door meetings like the one in Oh, Semantics, permitted to review internal deliberations on the company’s Slack-like messaging system, and provided with slide decks and Excel spreadsheets that lay out the minutiae of the new policies.
The topics discussed in Bickert’s policy meetings are almost always bummers. Today’s agenda: terrorism, non-sexual adult nudity, editing disturbing content on the pages of dead people, and, finally, “men are scum.” The issue is introduced by Mary deBree, who worked in Obama’s State Department. “We don’t want to silence people when they’re trying to raise awareness around—for example—sexual assault,” she begins. However. “However, um, the tension in this is that, if we allow more attacks on the basis of gender, this may also lead to more misogynistic content on the platform.”
Hence the dilemma.
When Facebook mass-deletes “men are scum,” it’s not thanks to top-down bias at the company, or some rogue men’s-rights Facebooker taking his stand against misandry. Nor is it a boneheaded “enforcement error” caused by one of Facebook’s 15,000 human content moderators around the world. The posts get removed because of one of Monika Bickert’s well-intentioned, though possibly doomed, policies.
At Facebook’s headquarters, in Menlo Park, the company’s team of experts—including veterans of the Obama administration—is working to identify and combat hate speech. Andy O’Connell, who is helping to design a new “supreme court,” stands at his desk.
Facebook has a 40-page rule book listing all the things that are disallowed on the platform. They’re called Community Standards, and they were made public in full for the first time in April 2018. One of them is hate speech, which Facebook defines as an “attack” against a “protected characteristic,” such as gender, sexuality, race, or religion. And one of the most serious ways to attack someone, Facebook has decided, is to compare them to something dehumanizing.
Like: Animals that are culturally perceived as intellectually or physically inferior. Or: Filth, bacteria, disease and feces.
That means statements like “black people are monkeys” and “Koreans are the scum of the earth” are subject to removal. But then, so is “men are trash.”
See the problem? If you remove dehumanizing attacks against gender, you may block speech designed to draw attention to a social movement like #MeToo. If you allow dehumanizing attacks against gender, well, you’re allowing dehumanizing attacks against gender. And if you do that, how do you defend other “protected” groups from similar attacks?
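To see why the rule treats these statements identically, here is a minimal sketch of a symmetric hate-speech check. The category lists, function name, and string matching are hypothetical stand-ins for illustration, not Facebook’s actual systems.

```python
# Hypothetical sketch of a symmetric hate-speech rule: any protected
# characteristic paired with any dehumanizing comparison is an attack.
# Lists and matching logic are illustrative only.

PROTECTED_CHARACTERISTICS = {"men", "women", "black people", "koreans", "muslims"}
DEHUMANIZING_COMPARISONS = {"scum", "trash", "monkeys", "filth"}

def violates_hate_speech_rule(post: str) -> bool:
    """Flag a post if it pairs a protected group with a dehumanizing term.

    Because the rule is symmetric across groups, "men are scum" and
    "women are scum" are indistinguishable to it.
    """
    text = post.lower()
    mentions_group = any(group in text for group in PROTECTED_CHARACTERISTICS)
    dehumanizes = any(term in text for term in DEHUMANIZING_COMPARISONS)
    return mentions_group and dehumanizes

if __name__ == "__main__":
    for post in ["Men are scum.", "Women are scum.", "Black people are monkeys."]:
        print(post, "->", "remove" if violates_hate_speech_rule(post) else "keep")
```

The symmetry is the whole point: the rule can’t see power dynamics, only the pairing of a group with an insult.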
DeBree and one of her colleagues, a China expert named David Caragliano, float a handful of fixes. Idea one: punish attacks against gender less harshly than, say, attacks against race. “Men are scum” would stay up. But so would “women are scum.” This doesn’t seem quite right.
Another idea is to treat the genders themselves differently. Caragliano cues up a slide deck. On it is a graph showing internal research that Facebook users are more upset by attacks against women than they are by attacks against men. Women would be protected against all hate speech, while men would be protected only against explicit calls for violence. “Women are scum” would be removed. “Men are scum” could stay.
Problem solved? Well … not quite. Bickert foresees another hurdle. “My instinct is not to treat the genders differently,” she tells me. “We live in a world where we now acknowledge there are many genders, not just men and women. I suspect the attacks you see are disproportionately against those genders and women, but not men.” If you create a policy based on that logic, though, “you end up in this space where it’s like, ‘Our hate-speech policy applies to everybody—except for men.’ ” Imagine how that would play.
To anyone who followed the “men are scum” issue from afar, Facebook’s inaction made it look aloof. In truth, “men are scum” is a well-known and much-debated topic in Menlo Park, with improbably large implications for the governing philosophy of the platform and, thus, the Internet. For philosophical and financial reasons, Facebook was established with one set of universally shared values. And in order to facilitate as much “sharing” as possible, no one group or individual would be treated differently from another. If you couldn’t call women “scum,” then you couldn’t call men “scum,” either.
If you take a step back, it’s kind of an idealistic way to think about the world. It’s also a classically Western, liberal way to think about the world. Give everyone an equal shot at free expression, and democracy and liberty will naturally flourish. Unfortunately, the more Facebook grows, the less democracy and liberty seem to be flourishing. Likewise, the more permissive Facebook’s platform, the more prone it is to be corrupted by trolls, bots, fake news, propaganda, and bigotry. Yet the more Facebook cracks down on that stuff, the more it looks like the company’s premise was compromised from the start.
That’s the problem with running a shadow government that seeks to regulate the speech of 2.3 billion people. Governing, by its nature, demands trade-offs. But much of the world right now is not in the mood for trade-offs. People gravitate to Facebook, in part, to live in cocoons of their own making. If Facebook has created a parallel online society for a quarter of the world to live in, the question facing Monika Bickert and her team is: What kind of society is it going to be?
In the beginning there was no shadow government at Facebook. There was just Dave. Dave Willner, Facebook’s very first rulemaker. When Willner arrived at the company, in 2008, Facebook had about 145 million monthly users. Prior to his arrival, the people deciding what was allowed on the platform were also the people answering customer-service e-mails.
Mostly, users complained that they wanted embarrassing party pics taken down. The company’s policy was essentially to take down “Hitler and naked people” plus “anything else that makes you feel uncomfortable.” Willner started clicking through 15,000 photos a day, removing things that made him uncomfortable.
That wasn’t ideal. So Willner wrote Facebook’s first constitution, a set of laws called Community Standards. He and his small brain trust generally adhered to John Stuart Mill’s seminal principle that speech should be banned only if used to stoke violence against others. This hands-off philosophy also aligned with Facebook’s self-interest. More speech equals more users, and more users equals more ad revenue. Plus, by positioning itself as an open “platform” rather than a publisher, Facebook could not be sued for libel, like a newspaper could. False information would stay up. Only obviously toxic content like terrorist propaganda, bullying, and graphic violence—plus criminal activity, like child pornography and drug trafficking—would come down. As would hate speech.
By 2013, when Willner left Facebook, the company’s user base had grown nearly tenfold, to 1.2 billion. The company, which was expanding with the acquisitions of Instagram and WhatsApp, had gotten way too big for Community Standards 1.0. That year, Bickert took over as Facebook’s content czar. A native of Southern California, Bickert spent the first phase of her career at the Justice Department in Chicago, prosecuting gang violence and public corruption. She spent the second phase at the U.S. Embassy in Bangkok, where she extradited child sex traffickers. While her primary focus was protecting kids, she also began to think more about freedom of speech, thanks to strict laws against criticizing the Thai monarchy. She was, in other words, already weighing versions of the fundamental tension—“safety” vs. “voice”—that undergirds all of Facebook’s policy decisions.
When Bickert started the job, Facebook was in panic mode. The Boston Marathon bombing had just occurred, and moderators were flagging photojournalism as graphic violence. Images of blown-off limbs, which clearly violated the policy, were being removed. And yet they were clearly newsworthy. The images were ultimately restored.
A month later, the opposite problem. The Internet began protesting violent rape jokes that weren’t being yanked from the platform. Facebook explained that its policies allowed for toxic speech that didn’t seem likely to provoke physical harm. But after several companies pulled their ads, Facebook took down the jokes and pledged to re-write its policies. What had started as an anodyne platform for people to share photos and random musings was now a media giant that an increasing share of the world relied on for news. It wasn’t so obvious anymore what content was or wasn’t inappropriate.
The differences between policing the real world and policing the Internet became manifest. “The level of context you have when you’re looking at criminal laws—there’s fact-finding, you provide evidence on both sides, you actually look at the problem in a 360-degree way,” Bickert says. “We don’t. We have a snippet of something online. We don’t know who’s behind this. Or what they’re like. So it’s a system at maximum scale with very imperfect information.”
Bickert started building out the policy team in her own image. Though she is trained as a litigator, her engaging manner and easy intellect are more redolent of a law professor. On Facebook’s campus, she hired a high-school teacher, a rape crisis counselor, a West Point counterterrorism expert, a Defense Department researcher. “I was hiring people who weren’t coming here because they cared about Facebook necessarily,” she says, but because they believed they could exert more influence in Menlo Park than in academia or gridlocked Washington.
By 2015, Bickert had teamed up with two other Facebook executives, Ellen Silver and Guy Rosen, to police bad content in a more targeted way. Bickert would set the policies. Silver would work with content moderators to implement them. And Rosen would build proactive detection tools to get in front of them. They called themselves “the three-sided coin.” (Leave it to Facebook to name its governing structure after money.)
Thanks to Rosen’s efforts, the company got masterful at eradicating porn and terrorist propaganda, which were fairly easy for artificial intelligence to classify. The rest of the hard-to-look-at stuff—violent threats, sexual solicitation and trafficking, images of self-harm and suicide, harassment, doxing, and hate speech—was largely up to humans to catch. First, individual Facebook users had to flag content they didn’t like. Then Facebook’s moderators, working across three continents, would consult manuals prepared by Silver’s team to see if it actually violated any of Bickert’s policies.
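A rough sketch of that two-track flow might look like the following; the names, categories, and threshold are hypothetical assumptions, since the article describes the division of labor rather than any implementation.

```python
# Illustrative sketch, assuming: proactive classifiers handle categories
# AI classifies reliably, while everything else waits for user reports
# and human review against the policy manual. Not Facebook's code.

from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    classifier_scores: dict[str, float] = field(default_factory=dict)
    user_reports: int = 0

AUTO_CATEGORIES = ("nudity", "terror_propaganda")  # "fairly easy for AI to classify"
AUTO_REMOVE_THRESHOLD = 0.9                        # hypothetical cutoff

def route(post: Post) -> str:
    # Track 1: proactive detection, no user report required.
    if any(post.classifier_scores.get(c, 0.0) > AUTO_REMOVE_THRESHOLD
           for c in AUTO_CATEGORIES):
        return "auto_remove"
    # Track 2: threats, harassment, hate speech, etc. wait for user flags,
    # then go to a human moderator who consults the policy manual.
    if post.user_reports > 0:
        return "human_review_queue"
    return "leave_up"

print(route(Post("…", {"nudity": 0.97})))                    # auto_remove
print(route(Post("some reported insult", user_reports=3)))   # human_review_queue
```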
After the 2016 election, however, this system began to feel inadequate to the enormity of the task. When I ask Bickert the biggest difference in her job now versus five years ago, she replies without hesitation. “Social climate,” she says. “Immigration into Europe. Ethnic tension. Sharply contested elections. The rise of hot speech.” Meanwhile, problems no one had anticipated, like fake news and misinformation, had also announced themselves on the platform. And that was all before the Cambridge Analytica scandal, in which Facebook was revealed to have turned over the personal information of tens of millions of users to a Trump-linked political consultancy.
Facebook’s reputation tanked. Many Silicon Valley elders, including some of the company’s early backers, went full apostate and denounced the platform. Zuckerberg wrote a 5,700-word mea culpa in which he conceded that the company may have had a negative impact on our “social infrastructure.” A bleak consensus emerged. To make money, observed former Slate editor Jacob Weisberg, Facebook had addicted us and sold our eyeballs to advertisers. In return, it made us “selfish, disagreeable, and lonely, while corroding democracy, truth, and economic equality.”
In the midst of all the bad press, in 2017, ProPublica dropped a bombshell report that directly impugned the company’s Community Standards. Sifting through troves of leaked content-moderation manuals, ProPublica discovered a number of strange and seemingly inexplicable policies. The most alarming: “white men” were protected by the company’s hate-speech laws, while “black children” were not. After a year of damning headlines about Facebook, media coverage of the policy presumed the worst about the company’s motivations.
Bickert decided she needed to re-write Facebook’s hate-speech laws. Four years into her job, here was her moment to see if she could, in her own way, redeem the company. By fixing the problems the original “move fast and break things” generation didn’t foresee—or created in the first place—maybe Facebook’s hidden legislative body could finally make the world a better place.
It’s a cliché of Silicon Valley that tech campuses are stocked with infantilizing perks and free food. The conventional wisdom is that these things keep employees from ever leaving the premises. Another reason becomes clear when you visit Facebook: there is absolutely nothing to do within a five-mile radius of campus. Step outside One Hacker Way, in Menlo Park, and you are met with highway on one side and pungent, marshy salt ponds on the other. The whole area feels strip-mined of natural beauty. All of which makes it both obscene and highly pleasant to sit in the shade of the redwood trees that were dug up and hauled to a woodsy outdoor gathering space on the grounds of Building 21. Facebook’s campus exists, in other words, as a physical manifestation of its business model: a privatization of the public square. And that hybrid status raises novel questions about how an individual company can—or should—regulate the speech of billions.
Of all the prohibited content on the platform, hate speech is by far the hardest to adjudicate. First, there isn’t an obvious way to define it. In the U.S., there’s no legal category for hate speech; in Europe, it’s delineated pretty specifically. Second, because people post hateful stuff for all kinds of personal and idiosyncratic reasons, there isn’t a systemic way to disincentivize it. To understand how Facebook polices hate speech is to understand how Facebook’s brain works.
I head to a small, pod-like room in Building 23 to meet with the two lead architects of the hate-speech revamp: Gaurav Upot, a seven-year veteran of the company, and David Caragliano, of the “men are scum” powwow. Caragliano begins, pulling up a slide deck. “The way I think about this is, we were in the worst of all worlds.” In one way, the old hate-speech policy was too narrow. The way it was written, you couldn’t say you wanted to “kill” a member of a protected group, like a Catholic or a Latina. But you could say you wanted to kill Catholic theologians or Latina actresses. The thinking was, if you’re being that specific, you’re almost certainly exaggerating, or joking, or maybe just upset about a crummy movie. The problem was that age, like “theologians” or “actresses,” wasn’t classified as a protected category. That’s how attacks on “black children” slipped through the net, while hating on “white men” was banned.
In a different way, the policy was also too broad. In 2017, a lot of L.G.B.T.Q. people were posting the word “dyke” on Facebook. That was deemed a slur, and was duly removed. A blind spot was exposed. Facebook, it has been observed, is able to judge content—but not intent. Matt Katsaros, a Facebook researcher who worked extensively on hate speech, cites an unexpected problem with flagging slurs. “The policy had drawn a distinction between ‘nigger’ and ‘nigga,’ ” he explains. The first was banned, the second was allowed. Makes sense. “But then we found that in Africa many use ‘nigger’ the same way people in America use ‘nigga.’ ” Back to the drawing board.
Caragliano and Upot began writing a granular policy meant to solve both these problems. A matrix of hate-speech laws from across the European Union that Caragliano prepared highlights the complexity of their endeavor—and amounts to a tacit admission that Facebook’s global standards aren’t really feasible in practice. On the y-axis of the chart are a number of hypothetical hate-speech examples. On the x-axis are various E.U. nations’ tolerance of the speech. So, for example, “Muslims are criminals” is clearly illegal in Belgium and France, likely legal in Denmark and Italy, and likely illegal in England and Germany. (And banned by Facebook.)
Caragliano and Upot pulled input from seven Facebook departments and more than 30 academics, NGOs, and activist groups. It took them four months to finish an initial revise of the policy. When they finished, in October 2017, it looked like this: Attacks would be divided into three categories of severity. The first included calls to violence (“kill”), dehumanizing words (“scum”), and offensive visual stereotypes (depicting a Jewish person as a rat). The second included “statements of inferiority.” The third encompassed “calls for exclusion.” Slurs, meanwhile, would be quarantined in their own context-dependent mini-category.
The revamp allowed Facebook to target problematic speech more specifically. Guy Rosen’s team, for example, trained its automatic detection classifier to seek only the most severe tier of hate speech. Since then, Facebook has gone from flagging about a quarter of all hate speech before users do, to more than half, without accidentally removing proud uses of “dyke.” The new rules also enabled Facebook to better classify overlooked categories like “Catholic theologians” or “black children,” who were now protected from hateful attacks.
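As a rough illustration of the tiered scheme, here is a hypothetical sketch; the terms are paraphrased from the examples above, and the data layout and function names are assumptions for clarity, not Facebook’s implementation.

```python
# Sketch of the three-tier severity structure described above, with
# slurs quarantined in their own context-dependent category.
# Terms are paraphrased from the article's examples; everything else
# is a hypothetical stand-in.

SEVERITY_TIERS = {
    "tier_1": {"kill", "scum", "filth"},        # violence, dehumanizing words
    "tier_2": {"inferior", "worthless"},        # statements of inferiority
    "tier_3": {"keep them out", "ban them"},    # calls for exclusion
}
SLURS = {"dyke"}  # intent matters: reclamation vs. attack

def classify_attack(text: str) -> str | None:
    """Return the matching severity tier, flag a slur for context review, or None."""
    lowered = text.lower()
    if any(slur in lowered for slur in SLURS):
        return "slur_context_review"
    for tier, terms in SEVERITY_TIERS.items():
        if any(term in lowered for term in terms):
            return tier
    return None

def proactively_detectable(tier: str | None) -> bool:
    # Per the article, automated detection was pointed only at the most severe tier.
    return tier == "tier_1"
```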
Here’s how it works in practice: late last year, Facebook removed several posts by Yair Netanyahu, the son of Israel’s prime minister. Netanyahu had called Palestinians “monsters” and advocated that “all the Muslims leave the land of Israel.” Palestinians and Muslims are both protected groups. “Monster” is dehumanizing, “leave” is a call for exclusion, and both are classified as hate speech. The removal was consistent with the new policy. In response, Netanyahu called Facebook “the thought police.”
All these changes, though, were happening in private. Last September, when Facebook C.O.O. Sheryl Sandberg testified before Congress about the threat of foreign interference in the upcoming midterm elections, Senator Kamala Harris took the occasion to grill her about the “black children” incident. Sandberg responded with a canned and confusing answer that reinforced the wide perception that Facebook is full of crap. “We care tremendously about civil rights,” Sandberg said. “We have worked closely with civil-rights groups to find hate speech on our platform and take it down.” Harris asked her to specify when the problematic policy had been changed. Sandberg couldn’t answer.
If Bickert was watching back in Menlo Park, she must have been beside herself. The “black children” loophole, which was at its root an operational issue, had been closed more than a year earlier. The gap between the competence of the policy wonks who write Facebook’s rules, and the high-profile executives who defend them in public, could hardly have seemed wider.
Not long after Sandberg’s testimony, at a diner in Palo Alto, I ask Bickert who her John Stuart Mill is. Louis Brandeis, she tells me, the former Supreme Court justice who helped enshrine free speech as a bedrock of 20th-century American democracy. “The Brandeis idea that you need people to be able to have the conversation, that that’s how they learn, that that’s how we develop an informed society that can govern itself—that matters, I think.”
In general, Bickert would rather not censor content that’s part of the national discourse, even when it’s blatantly hateful. A prime example: in 2017, just before Facebook began re-writing its hate-speech policies, U.S. representative Clay Higgins of Louisiana posted a message on his Facebook wall demanding that all “radicalized Islamic suspects” be executed. “Hunt them, identify them, and kill them,” he wrote. “Kill them all.” People were obviously outraged. But Facebook left the message up, as it didn’t violate the company’s hate-speech rules. (“Radicalized Islamic suspects” wasn’t a protected category.) Post-revamp, that outburst would run afoul of the Community Standards. Yet Facebook has not taken the message down, citing an exemption for “newsworthy” content. “Wouldn’t you want to be able to discuss that?” Bickert asks, when I point to Higgins’s post. “We really do want to give people room to share their political views, even when they are distasteful.”
Bickert is articulating not just the classic Facebookian position that sharing is good, but also what used to be a fairly uncontroversial idea, protected by the First Amendment, that a democratic society functions best when people have the right to say what they want, no matter how offensive. This Brandeisian principle, she fears, is eroding. “It’s scary,” she says. “When they talk to people on U.S. college campuses and ask how important freedom of speech is to you, something like 60 percent say it’s not important at all. The outrage about Facebook tends to push in the direction of taking down more speech. Fewer groups are willing to stand up for unpopular speech.”
Increasingly, real-world events are testing whether Facebook can cling to a vision of liberalism in which one set of laws applies to everyone, or whether it needs to transition to a social-justice-minded model in which certain groups warrant more protection than others. In the wake of the Syrian-refugee crisis, for instance, negative comments about Muslim migrants began flooding Facebook pages in Europe. Some advocated for tight borders. Some were straight-up racist. The goal, as Bickert saw it, was to find a middle ground, to avoid painting charged political speech with the same brush as legitimate hate speech. In 2016, Facebook devised a fix. Immigrants would be “quasi-protected.” You couldn’t call them “scum,” but you could call for their exclusion from a country. Facebook was starting to draw lines it had never before drawn.
When the “men are scum” problem first landed on its radar, in late 2017, Facebook began to consider an even more radical step. Should some groups be protected more than others? Women more than men, say, or gay people more than straight people? “People recognize power dynamics and feel like we are tone-deaf not to address them,” Caragliano says. After the policy meeting in Oh, Semantics last fall, Caragliano and deBree spun off four separate working groups to devise a fix for “men are scum.” Over the course of the next four months, they studied 6,800 examples of gendered hate speech that had appeared on the platform. It was easy, deBree said, to find reasons to defend a statement like “men are disgusting.” But it felt wrong to let users say “gay men are disgusting” or “Chinese men are disgusting.” In the end, in late January of this year, Facebook arrived at a deflating consensus: nothing would change.
In the abstract, almost everyone on Bickert’s team favored a hate-speech policy that took into account power imbalances between different groups. But for a user base of more than two billion people, such changes proved impossible to scale. On some level, there are no “fixes” to Facebook’s problems. There are only trade-offs. Like an actual government, it seemed, the best Facebook could hope for was a bunch of half-decent compromises. And like a government, anything it did would still piss off at least half its constituents.
There is a wide assumption, not unfounded, that Facebook has a financial stake in leaving total garbage up on its site. Reams of evidence, anecdotal and scholarly, suggest that its News Feed algorithm rewards inflammatory and addictive content. Even as the company has vowed over the past year to prioritize “local,” “meaningful,” and “informative” content, a shameless British clickbait site called LadBible consistently ranks as its top publisher. Unilad.com, which is basically the same thing, is rarely far behind. So are Breitbart, TMZ, and the Daily Mail. Facebook, as Wired editor Nicholas Thompson puts it, feeds us Cheetos rather than kale.
The problem with eradicating this sort of junk food is that the News Feed rewards stuff that people click on. Paternalistically replace it with high-minded content, and you’ll lose customers. Writing in The New York Times Magazine a couple years ago, Farhad Manjoo implied that Facebook, other than making money off clicks, didn’t really have a policy agenda at all. “The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition, or seniority,” he wrote. “The News Feed team’s ultimate mission is to figure out what users want [and] to give them more of that.” And what people want, evidently, is Cheetos.
But hate speech, fascinatingly, doesn’t work like this. If you give Facebook users too much of it, they actually do go away. On one of my visits to Facebook, I go out to dinner with Matt Katsaros, the hate-speech researcher. He lives in San Francisco’s Outer Sunset, which sits on the lip of the Pacific, and is one of the last neighborhoods in the city not yet spoiled by tech zillionaires. The sleepiness of the place suits Katsaros, a 30-year-old who in his spare time works as a textile artist.
Having spent the last several years staring at hate speech on the Internet, Katsaros is probably one of the world’s experts on disturbing social-media content. (“There’s a lot of women being compared to kitchen appliances right now,” he tells me.) The emotional toll of his job has significantly decreased his own appetite for ever posting anything on Facebook. “I spend my day talking to people who say, ‘Oh, they took a picture of me and wrote faggot on top of it.’ ” That, he says, is exactly why the company has a strong interest in eradicating hate speech. Some amount of inflammatory speech revs people up. But crank the ugliness dial too far, the research shows, and people withdraw. “There’s no incentive for us to have that stuff,” he says. “People get silenced, they don’t engage, and then they go off to do something else.”
Bickert makes a similar argument. “People will say, ‘Oh, your business interests are not aligned with the safety interests of the community.’ I completely disagree with that,” she says. Not only does hate speech turn others off, but the people who post it may not be ideal moneymakers for the company. “Those people are not likely to click on an ad for shoes, you know, in the middle of their hate. The person who is looking at puppy videos is a lot more likely.”
Last year, Facebook finally started posting metrics about how much banned content it was taking down from the site. It’s impossible to know exactly how much of the bad stuff they’re removing, since many users don’t report toxic content in the first place. (This is one reason it can be difficult to identify hate speech against persecuted minorities around the world: many users don’t consider it hate speech at all.) Still, the numbers are encouraging. From October to December of 2017, Facebook purged about 1.6 million pieces of hate speech. From July to September of 2018, that number spiked to 2.9 million, or some 30,000 pieces a day.
Facebook presumably wouldn’t be doing all this if it were actually invested in keeping its user base frothing at the mouth. Indeed, compared with early, failed social-media platforms like MySpace, Facebook is highly regulated. At the same time, it’s disturbing that the platform hosts so much toxicity in the first place. Bickert’s theory is that Facebook has gotten so big it’s come to mirror the ugliness of the rest of the world. “I’m not crazy about the level of discourse that I see online in general,” she concedes. At the same time, she resists laying all of that at the feet of Facebook’s filter bubbles. “Since the early 70s, when you measure people based on their sentiments towards opposing political parties, it’s been kind of going up like this”— her hands fly upward and outward. “So to the extent that that’s a lot of the garbage that’s on social media, that’s reflecting what’s in society.”
Maybe Bickert has a point. But there’s also a case to be made that Facebook, in trying to eradicate bad content, is locked in an unwinnable war with the world it helped create. After hastening the demise of the traditional news media by siphoning off much of its advertising revenue, then handing its platform over to whichever bottom-feeding blogs, shitposters, and fake-news profiteers could generate the most user outrage—well, of course, there’s plenty of toxic speech for the company to remove. And while in the real world—or even on more rudimentary social-media platforms like Reddit—the power of robust counter-speech can do a lot to push back against noxious commentary, on Facebook, like-minded users are herded together by the self-sorting News Feed algorithm.
I float the argument to Katsaros that, at least in the short term, the company’s financial incentives are misaligned. One way to beat back the trash on its platform would be to plow its profits into building active detection tools and hiring more content moderators. Facebook, for example, has just four full-time fact checkers in Nigeria, a country of 200 million.
“Didn’t that happen?” Katsaros asks. “The stock price took a huge hit.”
He’s right. It did. Last summer, after Facebook said it would invest in expanding its safety net, investors revolted. Which suggests …
“Are you just saying that, like, capitalism is bad?” Katsaros asks. “Is that what you’re getting at?” He stares at me deadpan for a couple seconds, before a gallows-humor smile spreads across his face. “Yeah. Definitely.”
Next time I checked in with Katsaros, he had quit the company.
If Facebook has a commander in chief (Zuckerberg) and a legislative body (Bickert’s team), it recently decided to add a third branch of government. In November, Zuckerberg wrote a post signaling his commitment to establishing an external, independent board with the power to evaluate and override the company’s most controversial decisions. In other words, Facebook would create a Supreme Court.
The idea seemed gauzy and abstract—a Zuckerberg specialty. In fact, two people at Facebook have been planning the Supreme Court for more than a year. Andy O’Connell came to Facebook from the State Department, and Heather Moore joined from the Justice Department. Both work under Monika Bickert. O’Connell explains Zuckerberg’s motivation. “His strong view is that all the content issues are issues for the community—that they are not motivated by business interests,” he says. “But nobody believes that, of course.” The court would improve not just Facebook’s decision-making but also the “perception of legitimacy.”
The task was daunting. Nobody has ever built a judicial system for a constituency of 2.3 billion people before. How many cases would it hear? Where would the judges come from? Would they be paid? Who pays them? Do deliberations take place over patchy Skype sessions? In a gargantuan mega-chamber, like the one used by the senate in the awful Star Wars prequels?
As we discuss the idea in Building 23, O’Connell starts off by narrowing the court’s jurisdiction. “The way I’m thinking about it,” he says, “it’s either really hard calls—things of significant public interest—or places where a novel case could cause us to reconsider a long-established policy.” That, in turn, leads to questions about how to choose the cases. Does Facebook pick? Do you let the public decide? “Then you get into all the, like, Boaty McBoatface public voting problems”—the unfortunate name Twitter users selected for a British research vessel.
Moore chimes in. “And then, should that be a public decision, the way the U.S. Supreme Court makes a public decision?” Only about 4 percent of Facebook’s content is news. The rest is personal. So imagine a case involving bullying, or revenge porn. What implications would that have for the privacy of the users involved? Whether you find it heartening or terrifying that Facebook is happily working to improve its shadow government while the sclerotic real-world version in Washington does nothing, there is something thrilling about witnessing a society being built in real time.
The week we meet, in late October, O’Connell and Moore decide to test-drive an early iteration of the court featuring a couple dozen “judges” flown in from around the world, with backgrounds in human rights, privacy, and journalism. The session takes place in an airy conference room on a relatively secluded part of campus. The case I sit in on involves a piece of inflammatory content posted in Myanmar, where Facebook has been widely blamed for abetting the ethnic cleansing of Rohingya Muslims, by failing to remove attacks against them. While Facebook won’t allow me to report on the specifics of the case, I can say that the question before the judges is this: should the court overturn Facebook’s decision not to remove the offending content, which did not technically violate the company’s hate-speech rules?
The judges break off into small panels to debate the case. As Facebook staffers introduce contextual wrinkles that make the case harder to decide, the judges struggle to weigh two of Facebook’s own stated principles. Should the platform defend the “voice” of an anti-Rohingya post? Or should it protect the “safety” of those who were threatened by it?
When the judges come together to issue a ruling, the vote is six to two to take the post down. Facebook’s decision, in the abstract, has been overturned. But given the complications of the case, nobody looks particularly satisfied.
One of the ironies of Facebook’s efforts to clean up its platform is inherent in the framing of the problem. If it’s to be a benevolent government, why is it focused almost entirely on policing users? In addition to monitoring and punishing bad behavior, shouldn’t it be incentivizing good behavior? In November, Facebook published a study by Matt Katsaros and three academics that sought to answer that question. Currently, when users post nudity or hate speech, they get a brief, automated message informing them of the violation and the removal of their content. Katsaros and his co-authors surveyed nearly 55,000 users who had received the message. Fifty-two percent felt they had not been treated fairly, while 57 percent said it was unlikely that Facebook understood their perspective. But among those who did feel fairly treated, the likelihood of repeat violations subsided.
Based on the findings, the paper argued, Facebook should focus less on punishing haters and more on creating a system of procedural justice that users could come to respect and trust. After all, Facebook’s government may be a deliberative body, but it’s by no means a democratic one. Last year, for the first time, the company began letting users file an appeal when Facebook removed their individual posts. But now Katsaros is gone. So is his co-author Sudhir Venkatesh, who has returned to his sociology post at Columbia University after a two-year stint at Facebook.
In late January, Facebook released a few more nuggets about the composition of its Supreme Court. The first iteration would likely feature 40 paid judges, serving on a part-time basis for three-year terms. The court would have the power to overrule Facebook’s content moderators, but not to rewrite the company’s Community Standards. Little was said about the role or rights of the individual users who would be bringing their appeals to the high court.
For now, the court is in Monika Bickert’s hands. At the conclusion of the session I attend, Bickert is piped in via video conference. Several judges comment that it would be a lot easier to resolve cases if they understood the motivations behind them. Bickert nods sympathetically. Facebook’s Supreme Court will be provided with plenty of context when it decides its cases. But there are only so many appeals it can hear. “The reality is, with billions of posts every day, and millions of reports from users every day,” Bickert says, “it’s just not something that we can operationalize at this scale.”
It was an outcome that Justice Louis Brandeis, the free-speech advocate, might have predicted. Brandeis was also an outspoken anti-monopolist, and Facebook’s critics often invoke him to justify breaking up the company. Even with an independent Supreme Court, it would seem, Facebook may be too big to succeed.
A couple weeks before the midterm elections, which Facebook is not accused of bungling, Bickert is playing a game. We are walking through a handsome Palo Alto neighborhood called Professorville, adjacent to Zuckerberg’s home neighborhood of Crescent Park. It’s almost Halloween, and I’ve never seen so many extravagantly creepy yard decorations in my life. Home after home, locked in a bourgeois arms race to rack up the most realistically undead ghouls and animatronic skeletons. Accompanying us is Ruchika Budhraja, my Facebook P.R. minder and one of Bickert’s close friends at the company. The game is: How much does that house cost?
The house we’re looking at is enormous: three stories tall, brownish with green trim, plus a wraparound veranda. Bickert appraises. “Hmmm,” she says. “Nine million.” I guess $8.25 million. Budhraja looks up the price on her phone. After a moment, the verdict. “It says four and a half,” she informs us.
What? That can’t be right. We’re standing in the middle of the most expensive real-estate market in the country. Bickert consults Zillow. “This says 12.9.” That’s more like it. Budhraja pleads no contest. “I just googled on the Internet,” she says. Bickert shakes her head. “Fake news.”
This is a chancy parlor game to be playing with a journalist. A tech executive, playfully quantifying how her industry has turned an entire metro area unaffordable. But in the moment, it doesn’t feel so tone-deaf. When Bickert marvels at the obscenity of Silicon Valley property values, you can tell it’s from a place of anthropological remove. This isn’t really her world. She may be at Facebook, but she isn’t of it.
Because she’s not a member of Facebook’s founding generation, she’s not defensive about the company’s shortcomings. And because she’s freed from delusions of tech-sector altruism, she isn’t precious about trying to make all of Facebook’s users happy. If the left wing of the Internet generally wants a safer and more sanitized Facebook, and the right wing wants a free-speech free-for-all, Bickert is clinging to an increasingly outmoded Obamian incrementalism. Lead from behind. Don’t do stupid shit. Anything more ambitious would be utopian.
“The world is too diverse,” she says. “And people see speech so differently, and safety so differently. I don’t think we’re ever going to craft the perfect set of policies where we’re like, ‘We’ve nailed it.’ I don’t think we ever will.”
And one more thing: You still can’t say, “Men are scum.”