Across town, a group of senior Facebook executives, including COO Sheryl Sandberg and vice president of global communications Elliot Schrage, had set up a temporary headquarters near the base of the mountain where Thomas Mann put his fictional sanatorium. The world’s biggest companies often establish receiving rooms at the world’s biggest elite confab, but this year Facebook’s pavilion wasn’t the usual scene of airy bonhomie. It was more like a bunker—one that saw a succession of tense meetings with the same tycoons, ministers, and journalists who had nodded along to Soros’ broadside.
Over the previous year Facebook’s stock had gone up as usual, but its reputation was rapidly sinking toward junk bond status. The world had learned how Russian intelligence operatives used the platform to manipulate US voters. Genocidal monks in Myanmar and a despot in the Philippines had taken a liking to the platform. Mid-level employees at the company were getting both crankier and more empowered, and critics everywhere were arguing that Facebook’s tools fostered tribalism and outrage. That argument gained credence with every utterance of Donald Trump, who had arrived in Davos that morning, the outrageous tribalist skunk at the globalists’ garden party.
CEO Mark Zuckerberg had recently pledged to spend 2018 trying to fix Facebook. But even the company’s nascent attempts to reform itself were being scrutinized as a possible declaration of war on the institutions of democracy. Earlier that month Facebook had unveiled a major change to its News Feed rankings to favor what the company called “meaningful social interactions.” News Feed is the core of Facebook—the central stream through which flow baby pictures, press reports, New Age koans, and Russian-made memes showing Satan endorsing Hillary Clinton. The changes would favor interactions between friends, which meant, among other things, that they would disfavor stories published by media companies. The company promised, though, that the blow would be softened somewhat for local news and publications that scored high on a user-driven metric of “trustworthiness.”
Davos provided a first chance for many media executives to confront Facebook’s leaders about these changes. And so, one by one, testy publishers and editors trudged down Davos Platz to Facebook’s headquarters throughout the week, ice cleats attached to their boots, seeking clarity. Facebook had become a capricious, godlike force in the lives of news organizations; it fed them about a third of their referral traffic while devouring a greater and greater share of the advertising revenue the media industry relies on. And now this. Why? Why would a company beset by fake news stick a knife into real news? And what would Facebook’s algorithm deem trustworthy? Would the media executives even get to see their own scores?
Facebook didn’t have ready answers to all of these questions; certainly not ones it wanted to give. The last one in particular—about trustworthiness scores—quickly inspired a heated debate among the company’s executives at Davos and their colleagues in Menlo Park. Some leaders, including Schrage, wanted to tell publishers their scores. It was only fair. Also in agreement was Campbell Brown, the company’s chief liaison with news publishers, whose job description includes absorbing some of the impact when Facebook and the news industry crash into one another.
But the engineers and product managers back at home in California said it was folly. Adam Mosseri, then head of News Feed, argued in emails that publishers would game the system if they knew their scores. Plus, they were too unsophisticated to understand the methodology, and the scores would constantly change anyway. To make matters worse, the company didn’t yet have a reliable measure of trustworthiness at hand.
Heated emails flew back and forth between Switzerland and Menlo Park. Solutions were proposed and shot down. It was a classic Facebook dilemma. The company’s algorithms embody choices so complex and interdependent that it’s hard for any human to get a handle on it all. If you explain some of what is happening, people get confused. They also tend to obsess over tiny factors in huge equations. So in this case, as in so many others over the years, Facebook chose opacity. Nothing would be revealed in Davos, and nothing would be revealed afterward. The media execs would walk away unsatisfied.
After Soros’ speech that Thursday night, those same editors and publishers headed back to their hotels, many to write, edit, or at least read all the news pouring out about the billionaire’s tirade. The words “their days are numbered” appeared in article after article. The next day, Sandberg sent an email to Schrage asking if he knew whether Soros had shorted Facebook’s stock.
Far from Davos, meanwhile, Facebook’s product engineers got down to the precise, algorithmic business of implementing Zuckerberg’s vision. If you want to promote trustworthy news for billions of people, you first have to specify what is trustworthy and what is news. Facebook was having a hard time with both. To define trustworthiness, the company was testing how people responded to surveys about their impressions of different publishers. To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving “politics, crime, or tragedy.”
That particular choice, which meant the algorithm would be less kind to all kinds of other news—from health and science to technology and sports—wasn’t something Facebook execs discussed with media leaders in Davos. And though it went through reviews with senior managers, not everyone at the company knew about it either. When one Facebook executive learned about it recently in a briefing with a lower-level engineer, they say they “nearly fell on the fucking floor.”
The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis, based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success.
Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Instagram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times, The Guardian, and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica, and they had some questions.
This much Facebook knew in the early months of 2018. The company also knew—because everyone knew—that Cambridge Analytica had gone on to work with the Trump campaign after Ted Cruz dropped out of the race. And some people at Facebook worried that the story of their company’s relationship with Cambridge Analytica was not over. One former Facebook communications official remembers being warned by a manager in the summer of 2017 that unresolved elements of the Cambridge Analytica story remained a grave vulnerability. No one at Facebook, however, knew exactly when or where the unexploded ordnance would go off. “The company doesn’t know yet what it doesn’t know yet,” the manager said. (The manager now denies saying so.)
The company first heard in late February that the Times and The Guardian had a story coming, but the department in charge of formulating a response was a house divided. In the fall, Facebook had hired a brilliant but fiery veteran of tech industry PR named Rachel Whetstone. She’d come over from Uber to run communications for Facebook’s WhatsApp, Instagram, and Messenger. Soon she was traveling with Zuckerberg for public events, joining Sandberg’s senior management meetings, and making decisions—like picking which outside public relations firms to cut or retain—that normally would have rested with those officially in charge of Facebook’s 300-person communications shop. The staff quickly sorted into fans and haters.
And so it was that a confused and fractious communications team huddled with management to debate how to respond to the Times and Guardian reporters. The standard approach would have been to correct misinformation or errors and spin the company’s side of the story. Facebook ultimately chose another tack. It would front-run the press: dump a bunch of information out in public on the eve of the stories’ publication, hoping to upstage them. It’s a tactic with a short-term benefit but a long-term cost. Investigative journalists are like pit bulls. Kick them once and they’ll never trust you again.
Facebook’s decision to take that risk, according to multiple people involved, was a close call. But on the night of Friday, March 16, the company announced it was suspending Cambridge Analytica from its platform. This was a fateful choice. “It’s why the Times hates us,” one senior executive says. Another communications official says, “For the last year, I’ve had to talk to reporters worried that we were going to front-run them. It’s the worst. Whatever the calculus, it wasn’t worth it.”
The tactic also didn’t work. The next day the story—focused on a charismatic whistle-blower with pink hair named Christopher Wylie—exploded in Europe and the United States. Wylie, a former Cambridge Analytica employee, was claiming that the company had not deleted the data it had taken from Facebook and that it may have used that data to swing the American presidential election. The first sentence of The Guardian’s reporting blared that this was “one of the tech giant’s biggest ever data breaches” and that Cambridge Analytica had used the data “to build a powerful software program to predict and influence choices at the ballot box.”
The story was a witch’s brew of Russian operatives, privacy violations, confusing data, and Donald Trump. It touched on nearly all the fraught issues of the moment. Politicians called for regulation; users called for boycotts. In a day, Facebook lost $36 billion in its market cap. Because many of its employees were compensated based on the stock’s performance, the drop did not go unnoticed in Menlo Park.
To this emotional story, Facebook had a programmer’s rational response. Nearly every fact in The Guardian’s opening paragraph was misleading, its leaders believed. The company hadn’t been breached—an academic had fairly downloaded data with permission and then unfairly handed it off. And the software that Cambridge Analytica built was not powerful, nor could it predict or influence choices at the ballot box.
But none of that mattered. When a Facebook executive named Alex Stamos tried on Twitter to argue that the word breach was being misused, he was swatted down. He soon deleted his tweets. His position was right, but who cares? If someone points a gun at you and holds up a sign that says hand’s up, you shouldn’t worry about the apostrophe. The story was the first of many to illuminate one of the central ironies of Facebook’s struggles. The company’s algorithms helped sustain a news ecosystem that prioritizes outrage, and that news ecosystem was learning to direct outrage at Facebook.
As the story spread, the company started melting down. Former employees remember scenes of chaos, with exhausted executives slipping in and out of Zuckerberg’s private conference room, known as the Aquarium, and Sandberg’s conference room, whose name, Only Good News, seemed increasingly incongruous. One employee remembers cans and snack wrappers everywhere; the door to the Aquarium would crack open and you could see people with their heads in their hands and feel the warmth from all the body heat. After saying too much before the story ran, the company said too little afterward. Senior managers begged Sandberg and Zuckerberg to publicly confront the issue. Both remained publicly silent.
“We had hundreds of reporters flooding our inboxes, and we had nothing to tell them,” says a member of the communications staff at the time. “I remember walking to one of the cafeterias and overhearing other Facebookers say, ‘Why aren’t we saying anything? Why is nothing happening?’ ”
According to numerous people who were involved, many factors contributed to Facebook’s baffling decision to stay mute for five days. Executives didn’t want a repeat of Zuckerberg’s ignominious performance after the 2016 election when, mostly off the cuff, he had proclaimed it “a pretty crazy idea” to think fake news had affected the result. And they continued to believe people would figure out that Cambridge Analytica’s data had been useless. According to one executive, “You can just buy all this fucking stuff, all this data, from the third-party ad networks that are tracking you all over the planet. You can get way, way, way more privacy-violating data from all these data brokers than you could by stealing it from Facebook.”
“Those five days were very, very long,” says Sandberg, who now acknowledges the delay was a mistake. The company became paralyzed, she says, because it didn’t know all the facts; it thought Cambridge Analytica had deleted the data. And it didn’t have a specific problem to fix. The loose privacy policies that allowed Kogan to collect so much data had been tightened years before. “We didn’t know how to respond in a system of imperfect information,” she says.
Facebook’s other problem was that it didn’t understand the wealth of antipathy that had built up against it over the previous two years. Its prime decisionmakers had run the same playbook successfully for a decade and a half: Do what they thought was best for the platform’s growth (often at the expense of user privacy), apologize if someone complained, and keep pushing forward. Or, as the old slogan went: Move fast and break things. Now the public thought Facebook had broken Western democracy. This privacy violation—unlike the many others before it—wasn’t one that people would simply get over.
Finally, on Wednesday, the company decided Zuckerberg should give a television interview. After snubbing CBS and PBS, the company summoned a CNN reporter whom the communications staff trusted to be reasonably kind. The network’s camera crews were treated like potential spies, and one communications official remembers being required to monitor them even when they went to the bathroom. (Facebook now says this was not company protocol.) In the interview itself, Zuckerberg apologized. But he was also specific: There would be audits and much more restrictive rules for anyone wanting access to Facebook data. Facebook would build a tool to let users know if their data had ended up with Cambridge Analytica. And he pledged that Facebook would make sure this kind of debacle never happened again.
A flurry of other interviews followed. That Wednesday, WIRED was given a quiet heads-up that we’d get to chat with Zuckerberg in the late afternoon. At about 4:45 pm, his communications chief rang to say he would be calling at 5. In that interview, Zuckerberg apologized again. But he brightened when he turned to one of the topics that, according to people close to him, truly engaged his imagination: using AI to keep humans from polluting Facebook. This was less a response to the Cambridge Analytica scandal than to the backlog of accusations, gathering since 2016, that Facebook had become a cesspool of toxic virality, but it was a problem he actually enjoyed figuring out how to solve. He didn’t think that AI could completely eliminate hate speech or nudity or spam, but it could get close. “My understanding with food safety is there’s a certain amount of dust that can get into the chicken as it’s going through the processing, and it’s not a large amount—it needs to be a very small amount,” he told WIRED.
The interviews were just the warmup for Zuckerberg’s next gauntlet: a set of public, televised appearances in April before three congressional committees to answer questions about Cambridge Analytica and months of other scandals. Congresspeople had been calling on him to testify for about a year, and he’d successfully avoided them. Now it was game time, and much of Facebook was terrified about how it would go.
As it turned out, most of the lawmakers proved astonishingly uninformed, and the CEO spent most of the day ably swatting back soft pitches. Back home, some Facebook employees stood in their cubicles and cheered. When a plodding Senator Orrin Hatch asked how, exactly, Facebook made money while offering its services for free, Zuckerberg responded confidently, “Senator, we run ads,” a phrase that was soon emblazoned on T-shirts in Menlo Park.
Then Sandberg and Zuckerberg began making a huge show of hiring humans to keep watch over the platform. Soon you couldn’t listen to a briefing or meet an executive without being told about the tens of thousands of content moderators who had joined the company. By the end of 2018, about 30,000 people were working on safety and security, which is roughly the number of newsroom employees at all the newspapers in the United States. Of those, about 15,000 are content reviewers, mostly contractors, employed at more than 20 giant review factories around the world.
Facebook was also working hard to create clear rules for enforcing its basic policies, effectively writing a constitution for the 1.5 billion daily users of the platform. The instructions for moderating hate speech alone run to more than 200 pages. Moderators must undergo 80 hours of training before they can start. Among other things, they must be fluent in emoji; they study, for example, a document showing that a crown, roses, and dollar signs might mean a pimp is offering up prostitutes. About 100 people across the company meet every other Tuesday to review the policies. A similar group meets every Friday to review content policy enforcement screwups, like when, as happened in early July, the company flagged the Declaration of Independence as hate speech.
The company hired all of these people in no small part because of pressure from its critics. It was also the company’s fate, however, that the same critics discovered that moderating content on Facebook can be a miserable, soul-scorching job. As Casey Newton reported in an investigation for The Verge, the average content moderator in a Facebook contractor’s outpost in Arizona makes $28,000 per year, and many of them say they have developed PTSD-like symptoms due to their work. Others have spent so much time looking through conspiracy theories that they’ve become believers themselves.
Ultimately, Facebook knows that the job will have to be done primarily by machines—which is the company’s preference anyway. Machines can browse porn all day without flatlining, and they haven’t learned to unionize yet. And so simultaneously the company mounted a huge effort, led by CTO Mike Schroepfer, to create artificial intelligence systems that can, at scale, identify the content that Facebook wants to zap from its platform, including spam, nudes, hate speech, ISIS propaganda, and videos of children being put in washing machines. An even trickier goal was to identify the stuff that Facebook wants to demote but not eliminate—like misleading clickbait crap. Over the past several years, the core AI team at Facebook has doubled in size annually.
Even a basic machine-learning system can pretty reliably identify and block pornography or images of graphic violence. Hate speech is much harder. A sentence can be hateful or prideful depending on who says it. “You not my bitch, then bitch you are done,” could be a death threat, an inspiration, or a lyric from Cardi B. Imagine trying to decode a similarly complex line in Spanish, Mandarin, or Burmese. False news is equally tricky. Facebook doesn’t want lies or bull on the platform. But it knows that truth can be a kaleidoscope. Well-meaning people get things wrong on the internet; malevolent actors sometimes get things right.
Schroepfer’s job was to get Facebook’s AI up to snuff on catching even these devilishly ambiguous forms of content. With each category the tools and the success rate vary. But the basic technique is roughly the same: You need a collection of data that has been categorized, and then you need to train the machines on it. For spam and nudity these databases already exist, created by hand in more innocent days when the threats online were fake Viagra and Goatse memes, not Vladimir Putin and Nazis. In the other categories you need to construct the labeled data sets yourself—ideally without hiring an army of humans to do so.
One idea Schroepfer discussed enthusiastically with WIRED involved starting off with just a few examples of content identified by humans as hate speech and then using AI to generate similar content and simultaneously label it. Like a scientist bioengineering both rodents and rat terriers, this approach would use software to both create and identify ever-more-complex slurs, insults, and racist crap. Eventually the terriers, specially trained on superpowered rats, could be set loose across all of Facebook.
Roughly three years ago, the company’s efforts in AI that screens content were nowhere. But Facebook quickly found success in classifying spam and posts supporting terror. Now more than 99 percent of content created in those categories is identified before any human on the platform flags it. Sex, as in the rest of human life, is more complicated. The success rate for identifying nudity is 96 percent. Hate speech is even tougher: Facebook finds just 52 percent before users do.
These are the kinds of problems that Facebook executives love to talk about. They involve math and logic, and the people who work at the company are some of the most logical you’ll ever meet. But Cambridge Analytica was mostly a privacy scandal. Facebook’s most visible response to it was to amp up content moderation aimed at keeping the platform safe and civil. Yet sometimes the two big values involved—privacy and civility—come into opposition. If you give people ways to keep their data completely secret, you also create secret tunnels where rats can scurry around undetected.
In other words, every choice involves a trade-off, and every trade-off means some value has been spurned. And every value that you spurn—particularly when you’re Facebook in 2018—means that a hammer is going to come down on your head.
Instagram, which was founded in 2010 by Kevin Systrom and Mike Krieger, had been acquired by Facebook in 2012 for $1 billion. The price at the time seemed ludicrously high: That much money for a company with 13 employees? Soon the price would seem ludicrously low: A mere billion dollars for the fastest-growing social network in the world? Internally, Facebook at first watched Instagram’s relentless growth with pride. But, according to some, pride turned to suspicion as the pupil’s success matched and then surpassed the professor’s.
Systrom’s glowing press coverage didn’t help. In 2014, according to someone directly involved, Zuckerberg ordered that no other executives should sit for magazine profiles without his or Sandberg’s approval. Some people involved remember this as a move to make it harder for rivals to find employees to poach; others remember it as a direct effort to contain Systrom. Top executives at Facebook also believed that Instagram’s growth was cannibalizing the Blue App. In 2017, Chris Cox’s team showed data to senior executives suggesting that people were sharing less inside the Blue App in part because of Instagram. To some people, this sounded like they were simply presenting a problem to solve. Others were stunned and took it as a sign that management at Facebook cared more about the product they had birthed than one they had adopted.
Most of Instagram—and some of Facebook too—hated the idea that the growth of the photo-sharing app could be seen, in any way, as trouble. Yes, people were using the Blue App less and Instagram more. But that didn’t mean Instagram was poaching users. Maybe people leaving the Blue App would have spent their time on Snapchat or watching Netflix or mowing their lawns. And if Instagram was growing quickly, maybe it was because the product was good? Instagram had its problems—bullying, shaming, FOMO, propaganda, corrupt micro-influencers—but its internal architecture had helped it avoid some of the demons that haunted the industry. Posts are hard to reshare, which slows virality. External links are harder to embed, which keeps the fake-news providers away. Minimalist design also minimized problems. For years, Systrom and Krieger took pride in keeping Instagram free of hamburgers: icons made of three horizontal lines in the corner of a screen that open a menu. Facebook has hamburgers, and other menus, all over the place.
Systrom and Krieger had also seemingly anticipated the techlash ahead of their colleagues up the road in Menlo Park. Even before Trump’s election, Instagram had made fighting toxic comments its top priority, and it had rolled out an AI filtering system in June 2017. By the spring of 2018, the company was working on a product to alert users that “you’re all caught up” when they’d seen all the new posts in their feed. In other words, “put your damn phone down and talk to your friends.” That may be a counterintuitive way to grow, but earning goodwill does help over the long run. And sacrificing growth for other goals wasn’t Facebook’s style at all.
By the time the Cambridge Analytica scandal hit, Systrom and Krieger, according to people familiar with their thinking, were already worried that Zuckerberg was souring on them. They had been allowed to run their company reasonably independently for six years, but now Zuckerberg was exerting more control and making more requests. When conversations about the reorganization began, the Instagram founders pushed to bring in Mosseri. They liked him, and they viewed him as the most trustworthy member of Zuckerberg’s inner circle. He had a design background and a mathematical mind. They were losing autonomy, so they might as well get the most trusted emissary from the mothership. Or as Lyndon Johnson said about J. Edgar Hoover, “It’s probably better to have him inside the tent pissing out than outside the tent pissing in.”
Meanwhile, the founders of WhatsApp, Brian Acton and Jan Koum, had moved outside of Facebook’s tent and commenced fire. Zuckerberg had bought the encrypted messaging platform in 2014 for $19 billion, but the cultures had never entirely meshed. The two sides couldn’t agree on how to make money—WhatsApp’s end-to-end encryption wasn’t originally designed to support targeted ads—and they had other differences as well. WhatsApp insisted on having its own conference rooms, and, in the perfect metaphor for the two companies’ diverging attitudes over privacy, WhatsApp employees had special bathroom stalls designed with doors that went down to the floor, unlike the standard ones used by the rest of Facebook.
Eventually the battles became too much for Acton and Koum, who had also come to believe that Facebook no longer intended to leave them alone. Acton quit and started funding a competing messaging platform called Signal. During the Cambridge Analytica scandal, he tweeted, “It is time. #deleteFacebook.” Soon afterward, Koum, who held a seat on Facebook’s board, announced that he too was quitting, to play more Ultimate Frisbee and work on his collection of air-cooled Porsches.
The departure of the WhatsApp founders created a brief spasm of bad press. But now Acton and Koum were gone, Mosseri was in place, and Cox was running all three messaging platforms. And that meant Facebook could truly pursue its most ambitious and important idea of 2018: bringing all those platforms together into something new.
At WIRED, the month after an image of a bruised Zuckerberg appeared on the cover, the numbers were even more stark. One day, traffic from Facebook suddenly dropped by 90 percent, and for four weeks it stayed there. After protestations, emails, and a raised eyebrow or two about the coincidence, Facebook finally got to the bottom of it. An ad run by a liquor advertiser, targeted at WIRED readers, had been mistakenly categorized as engagement bait by the platform. In response, the algorithm had let all the air out of WIRED’s tires. The publication could post whatever it wanted, but few would read it. Once the error was identified, traffic soared back. It was a reminder that journalists are just sharecroppers on Facebook’s giant farm. And sometimes conditions on the farm can change without warning.
Inside Facebook, of course, it was not surprising that traffic to publishers went down after the pivot to “meaningful social interactions.” That outcome was the point. It meant people would be spending more time on posts created by their friends and family, the genuinely unique content that Facebook offers. According to multiple Facebook employees, a handful of executives considered it a small plus, too, that the news industry was feeling a little pain after all its negative coverage. The company denies this—“no one at Facebook is rooting against the news industry,” says Anne Kornblut, the company’s director of news partnerships—but, in any case, by early May the pain seemed to have become perhaps excessive. A number of stories appeared in the press about the damage done by the algorithmic changes. And so Sheryl Sandberg, who colleagues say often responds with agitation to negative news stories, sent an email on May 7 calling a meeting of her top lieutenants.
That kicked off a wide-ranging conversation over the next two months. The key question was whether the company should introduce new factors into its algorithm to help serious publications. The product team working on news wanted Facebook to increase the amount of public content—things shared by news organizations, businesses, celebrities—allowed in News Feed. They also wanted the company to provide stronger boosts to publishers deemed trustworthy, and they suggested the company hire a large team of human curators to elevate the highest-quality news inside of News Feed. The company discussed setting up a new section on the app entirely for news and directed a team to quietly work on developing it; one of the team’s ambitions was to try to build a competitor to Apple News.
Some of the company’s most senior execs, notably Chris Cox, agreed that Facebook needed to give serious publishers a leg up. Others pushed back, especially Joel Kaplan, a former deputy chief of staff to George W. Bush who was now Facebook’s vice president of global public policy. Supporting high-quality outlets would inevitably make it look like the platform was supporting liberals, which could lead to trouble in Washington, a town run mainly by conservatives. Breitbart and the Daily Caller, Kaplan argued, deserved protections too. At the end of the climactic meeting, on July 9, Zuckerberg sided with Kaplan and announced that he was tabling the decision about adding ways to boost publishers, effectively killing the plan. To one person involved in the meeting, it seemed like a sign of shifting power. Cox had lost and Kaplan had won. Either way, Facebook’s overall traffic to news organizations continued to plummet.
Meanwhile, the dynamics inside the communications department had gotten even worse. Elliot Schrage had announced that he was going to leave his post as VP of global communications. So the company had begun looking for his replacement; it focused on interviewing candidates from the political world, including Denis McDonough and Lisa Monaco, former senior officials in the Obama administration. But Rachel Whetstone also declared that she wanted the job. At least two other executives said they would quit if she got it.
The need for leadership in communications only became more apparent on July 11, when John Hegeman, the new head of News Feed, was asked in an interview why the company didn’t ban Alex Jones’ InfoWars from the platform. The honest answer would probably have been to admit that Facebook gives a rather wide berth to the far right because it’s so worried about being called liberal. Hegeman, though, went with the following: “We created Facebook to be a place where different people can have a voice. And different publishers have very different points of view.”
This, predictably, didn’t go over well with the segments of the news media that actually try to tell the truth and that have never, as Alex Jones has done, reported that the children massacred at Sandy Hook were actors. Public fury ensued. Most of Facebook didn’t want to respond. But Whetstone decided it was worth a try. She took to the @Facebook account—which one executive involved in the decision called “a big fucking marshmallow we shouldn’t ever use like this”—and started tweeting at the company’s critics.
“Sorry you feel that way,” she typed to one, and explained that, instead of banning pages that peddle false information, Facebook demotes them. The tweet was very quickly ratioed, a Twitter term of art for a statement that no one likes and that receives more comments than retweets. Whetstone, as @Facebook, also declared that just as many pages on the left pump out misinformation as on the right. That tweet got badly ratioed too.
Five days later, Zuckerberg sat down for an interview with Kara Swisher, the influential editor of Recode. Whetstone was in charge of prep. Before Zuckerberg headed to the microphone, Whetstone supplied him with a list of rough talking points, including one that inexplicably violated the first rule of American civic discourse: Don’t invoke the Holocaust while trying to make a nuanced point.
About 20 minutes into the interview, while ambling through his answer to a question about Alex Jones, Zuckerberg declared, “I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down, because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.” Sometimes, Zuckerberg added, he himself makes errors in public statements.
The comment was absurd: People who deny that the Holocaust happened generally aren’t just slipping up in the midst of a good-faith intellectual disagreement. They’re spreading anti-Semitic hate—intentionally. Soon the company announced that it had taken a closer look at Jones’ activity on the platform and had finally chosen to ban him. His past sins, Facebook decided, had crossed into the domain of standards violations.
Eventually another candidate for the top PR job was brought into the headquarters in Menlo Park: Nick Clegg, former deputy prime minister of the UK. Perhaps in an effort to disguise himself—or perhaps because he had decided to go aggressively Silicon Valley casual—he showed up in jeans, sneakers, and an untucked shirt. His interviews must have gone better than his disguise, though, as he was hired over the luminaries from Washington. “What makes him incredibly well qualified,” said Caryn Marooney, the company’s VP of communications, “is that he helped run a country.”
Zuckerberg wanted to include a line to this effect in his script for the call. Whetstone counseled him not to, or at least to temper it with praise for Instagram’s founding team. In the end, Zuckerberg’s script declared, “We believe Instagram has been able to use Facebook’s infrastructure to grow more than twice as quickly as it would have on its own. A big congratulations to the Instagram team—and to all the teams across our company that have contributed to this success.”
After the call—with its payload of bad news about growth and investment—Facebook’s stock dropped by nearly 20 percent. But Zuckerberg didn’t forget about Instagram. A few days later he asked his head of growth, Javier Olivan, to draw up a list of all the ways Facebook supported Instagram: running ads for it on the Blue App; including link-backs when someone posted a photo on Instagram and then cross-published it in Facebook News Feed; allowing Instagram to access a new user’s Facebook connections in order to recommend people to follow. Once he had the list, Zuckerberg conveyed to Instagram’s leaders that he was pulling away the supports. Facebook had given Instagram servers, health insurance, and the best engineers in the world. Now Instagram was just being asked to give a little back—and to help seal off the vents that were allowing people to leak away from the Blue App.
Systrom soon posted a memo to his entire staff explaining Zuckerberg’s decision to turn off supports for traffic to Instagram. He disagreed with the move, but he was committed to the changes and was telling his staff that they had to go along. The memo “was like a flame going up inside the company,” a former senior manager says. The document also enraged Facebook, which was terrified it would leak. Systrom soon departed on paternity leave.
The tensions didn’t let up. In the middle of August, Facebook prototyped a location-tracking service inside of Instagram, the kind of privacy intrusion that Instagram’s management team had long resisted. Later that month, a hamburger menu appeared. “It felt very personal,” says a senior Instagram employee who spent the month implementing the changes. It felt particularly wrong, the employee says, because Facebook is a data-driven company, and the data strongly suggested that Instagram’s growth was good for everyone.
Friends of Systrom and Krieger say the strife was wearing on the founders too. According to someone who heard the conversation, Systrom openly wondered whether Zuckerberg was treating him the way Donald Trump was treating Jeff Sessions: making life miserable in hopes that he’d quit without having to be fired. Instagram’s managers also believed that Facebook was being miserly about their budget. In past years they had been able to almost double their number of engineers. In the summer of 2018 they were told that their growth rate would drop to less than half of that.
When it was time for Systrom to return from paternity leave, the two founders decided to make the leave permanent. They made the decision quickly, but it was far from impulsive. According to someone familiar with their thinking, their unhappiness with Facebook stemmed from tensions that had brewed over many years and had boiled over in the past six months.
And so, on a Monday morning, Systrom and Krieger went into Chris Cox’s office and told him the news. Systrom and Krieger then notified their team about the decision. Somehow the information reached Mike Isaac, a reporter at The New York Times, before it reached the communications teams for either Facebook or Instagram. The story appeared online a few hours later, as Instagram’s head of communications was on a flight circling above New York City.
After the announcement, Systrom and Krieger decided to play nice. Soon there was a lovely photograph of the two founders smiling next to Mosseri, the obvious choice to replace them. And then they headed off into the unknown to take time off, decompress, and figure out what comes next. Systrom and Krieger told friends they both wanted to get back into coding after so many years away from it.
On September 27, Brett Kavanaugh appeared before the Senate Judiciary Committee after four hours of wrenching recollections by his primary accuser, Christine Blasey Ford. Laura Cox Kaplan sat right behind him as the hearing descended into rage and recrimination. Joel Kaplan sat one row back, stoic and thoughtful, directly in view of the cameras broadcasting the scene to the world.
Kaplan isn’t widely known outside of Facebook. But he’s not anonymous, and he wasn’t wearing a fake mustache. As Kavanaugh testified, journalists started tweeting a screenshot of the tableau. At a meeting in Menlo Park, executives passed around a phone showing one of these tweets and stared, mouths agape. None of them knew Kaplan was going to be there. The man who was supposed to smooth over Facebook’s political dramas had inserted the company right into the middle of one.
Kaplan had long been friends with Sandberg; they’d even dated as undergraduates at Harvard. But despite rumors to the contrary, he had told neither her nor Zuckerberg that he would be at the hearing, much less that he would be sitting in the gallery of supporters behind the star witness. “He’s too smart to do that,” one executive who works with him says. “That way, Joel gets to go. Facebook gets to remind people that it employs Republicans. Sheryl gets to be shocked. And Mark gets to denounce it.”
If that was the plan, it worked to perfection. Soon Facebook’s internal message boards were lighting up with employees mortified at what Kaplan had done. Management’s initial response was limp and lame: A communications officer told the staff that Kaplan attended the hearing as part of a planned day off in his personal capacity. That wasn’t a good move. Someone visited the human resources portal and noted that he hadn’t filed to take the day off.
The hearings were on a Thursday. A week and a day later, Facebook called an all-hands to discuss what had happened. The giant cafeteria in Facebook’s headquarters was cleared to create space for a town hall. Hundreds of chairs were arranged with three aisles to accommodate people with questions and comments. Most of the questions and comments came from women who came forward to recount their own experiences of sexual assault, harassment, and abuse.
Zuckerberg, Sandberg, and other members of management were standing on the right side of the stage, facing the audience and the moderator. Whenever a question was asked of one of them, they would stand up and take the mic. Kaplan appeared via video conference looking, according to one viewer, like a hostage trying to smile while his captors stood just offscreen. Another participant described him as “looking like someone had just shot his dog in the face.” This participant added, “I don’t think there was a single male participant, except for Zuckerberg looking down and sad onstage and Kaplan looking dumbfounded on the screen.”
Employees who watched expressed different emotions. Some felt empowered and moved by the voices of women in a company where top management is overwhelmingly male. Another said, “My eyes rolled to the back of my head” watching people make specific personnel demands of Zuckerberg, including that Kaplan undergo sensitivity training. For much of the staff, it was cathartic. Facebook was finally reckoning, in a way, with the #MeToo movement and the profound bias toward men in Silicon Valley. For others it all seemed ludicrous, narcissistic, and emblematic of the liberal, politically correct bubble that the company occupies. A guy had sat in silence to support his best friend who had been nominated to the Supreme Court; as a consequence, he needed to be publicly flogged?
In the days after the hearings, Facebook organized small group discussions, led by managers, in which 10 or so people got together to discuss the issue. There were tears, grievances, emotions, debate. “It was a really bizarre confluence of a lot of issues that were popped in the zit that was the SCOTUS hearing,” one participant says. Kaplan, though, seemed to have moved on. The day after his appearance on the conference call, he hosted a party to celebrate Kavanaugh’s lifetime appointment. Some colleagues were aghast. According to one who had taken his side during the town hall, this was a step too far. That was “just spiking the football,” they said. Sandberg was more forgiving. “It’s his house,” she told WIRED. “That is a very different decision than sitting at a public hearing.”
In a year during which Facebook made endless errors, Kaplan’s insertion of the company into a political maelstrom seemed like one of the clumsiest. But in retrospect, Facebook executives aren’t sure that Kaplan did lasting harm. His blunder opened up a series of useful conversations in a workplace that had long focused more on coding than inclusion. Also, according to another executive, the episode and the press that followed surely helped appease the company’s would-be regulators. It’s useful to remind the Republicans who run most of Washington that Facebook isn’t staffed entirely by snowflakes and libs.
In September, however, the news team managed to convince Zuckerberg to start administering ice water to the parched executives of the news industry. That month, Tom Alison, one of the team’s leaders, circulated a document to most of Facebook’s senior managers; it began by proclaiming that, on news, “we lack clear strategy and alignment.”