Why the fight against disinformation, sham accounts and trolls won’t be any easier in 2020

The big tech companies have announced aggressive steps to keep trolls, bots and online fakery from marring another presidential election — from Facebook’s removal of billions of fake accounts to Twitter’s spurning of political ads.

But it’s a never-ending game of whack-a-mole that’s only getting harder as we barrel toward the 2020 election. Disinformation peddlers are deploying new, more subversive techniques and American operatives have adopted some of the deceptive tactics Russians tapped in 2016. Meanwhile, tech companies face thorny and sometimes subjective choices about how to combat them — at times drawing flak from both Democrats and Republicans as a result.

Here is our roundup of some of the evolving challenges Silicon Valley faces as it tries to counter online lies and bad actors heading into the 2020 election cycle:

1) American trolls may be a greater threat than Russians

Russia-backed trolls notoriously flooded social media with disinformation around the presidential election in 2016, in what special counsel Robert Mueller’s investigators described as a multimillion-dollar plot involving years of planning, hundreds of people and a wave of fake accounts posting messages and ads on platforms like Facebook, Twitter and Google-owned YouTube.

This time around — as experts have warned — a growing share of the threat is likely to originate in America.

“It’s likely that there will be a high volume of misinformation and disinformation pegged to the 2020 election, with the majority of it being generated right here in the United States, as opposed to coming from overseas,” said Paul Barrett, deputy director of NYU’s Stern Center for Business and Human Rights.

Barrett, the author of a recent report on 2020 disinformation, noted that lies and misleading claims about 2020 candidates originating in the U.S. have already spread across social media. Those include fabricated sex scandals involving South Bend, Ind., Mayor Pete Buttigieg and Sen. Elizabeth Warren (D-Mass.) and a smear campaign calling Sen. Kamala Harris (D-Calif.) “not an American black” because of her multiracial heritage. (The latter claim got a boost on Twitter from Donald Trump Jr.)

Before last year’s midterm elections, Americans similarly amplified fake messages such as a “#nomenmidterms” hashtag that urged liberal men to stay away from the polls to make “a Woman’s Vote Worth more.” Twitter suspended at least one person — actor James Woods — for retweeting that message.

“A lot of the disinformation that we can identify tends to be domestic,” said Nahema Marchal, a researcher at the Oxford Internet Institute’s Computational Propaganda Project. “Just regular private citizens leveraging the Kremlin’s playbook, if you will, to create … a divisive narrative, or just mixing factual reality with made-up facts.”

Tech companies say they’ve broadened their fight against disinformation as a result. Facebook, for instance, announced in October that it had expanded its policies against “coordinated inauthentic behavior” to reflect a rise in disinformation campaigns run by non-state actors, domestic groups and companies. But people tracking the spread of fakery say it remains a problem, especially inside closed groups like those on Facebook.

2) And policing domestic meddling is tricky

U.S. law forbids foreigners from taking part in American political campaigns — a fact that made it easy for members of Congress to criticize Facebook for accepting rubles as payment for political ads in 2016.

But Americans are allowed, even encouraged, to partake in their own democracy — which makes things a lot more complicated when they use social media to try to skew the electoral process. For one thing, the companies face a technical challenge: Domestic meddling doesn’t leave obvious markers such as ads written in broken English and traced back to Russian internet addresses.

More fundamentally, there’s often no clear line between bad-faith meddling and dirty politics. It’s not illegal to run a mud-slinging campaign or engage in unscrupulous electioneering. And the tech companies are wary of being seen as infringing on Americans’ right to engage in political speech — all the more so as conservatives such as President Donald Trump accuse them of silencing their voices.

Plus, the line between foreign and domestic can be blurry. Even in 2016, the Kremlin-backed troll farm known as the Internet Research Agency relied on Americans to boost their disinformation. Now, claims with hazy origins are being picked up by Americans without need for a coordinated 2016-style foreign campaign. Simon Rosenberg, a longtime Democratic strategist who has spent recent years focused on online disinformation, points to Trump’s promotion of the theory that Ukraine significantly meddled in the 2016 U.S. election, a charge that some experts trace back to Russian intelligence.

“It’s hard to know if something is foreign or domestic,” said Rosenberg, once it “gets swept up in this vast ‘Wizard of Oz’-like noise machine.”

3) Bad actors are learning

Experts agree on one thing: The tactics that tech companies encounter in 2020 will look different from those they’ve been trying to fend off since 2016.

“What we’re going to see is the continued evolution and development of new approaches, new experimentation trying to see what will work and what won’t,” said Lee Foster, who leads the information operations analysis team at the cybersecurity firm FireEye.

Foster said the “underlying motivations” of undermining democratic institutions and casting doubt on election results will remain constant, but the trolls have already evolved their tactics.

For instance, they’ve gotten better at obscuring their online activity to avoid automatic detection, even as social media platforms ramp up their use of software to dismantle bot networks and eradicate inauthentic accounts.

“One of the challenges for the platforms is that, on the one hand, the public understandably demands more transparency from them about how they take down or identify state-sponsored attacks or how they take down these big networks of inauthentic accounts, but at the same time they can’t reveal too much at the risk of playing into bad actors’ hands,” said Oxford’s Marchal.

Researchers have already observed extensive efforts to distribute disinformation through user-generated posts — known as “organic” content — rather than the ads or paid messages that were prominent in the 2016 disinformation campaigns.

Foster, for example, cited trolls impersonating journalists or other more reliable figures to give disinformation greater legitimacy. And Marchal noted a rise in the use of memes and doctored videos, whose origins can be difficult to track down. Jesse Littlewood, vice president for campaigns at the advocacy group Common Cause, said social media posts aimed at voter suppression frequently appear no different from ordinary people sharing election updates in good faith — messages such as “you can text your vote” or “the election’s a different day” that can be “quite harmful.”

Tech companies insist they are learning, too. Since the 2016 election, Google, Facebook and Twitter have devoted security experts and engineers to tackling disinformation in national elections across the globe, including the 2018 midterms in the United States. The companies say they have gotten better at detecting and removing fake accounts, particularly those engaged in coordinated campaigns.

But other tactics may have escaped detection so far. NYU’s Barrett noted that disinformation-for-hire operations sometimes employed by corporations may be ripe for use in U.S. politics, if they’re not already.

He pointed to a recent experiment conducted by the cyber threat intelligence firm Recorded Future, which said it paid two shadowy Russian “threat actors” a total of just $6,050 to generate media campaigns promoting and trashing a fictitious company. Barrett said the project was intended “to lure out of the shadows firms that are willing to do this kind of work,” and demonstrated how easy it is to generate and sow disinformation.

Real-world examples include a hyper-partisan, skewed news operation started by a former Fox News executive and Facebook’s accusations that an Israeli social media company profited from creating hundreds of fake accounts. That “shows that there are firms out there that are willing and eager to engage in this kind of underhanded activity,” Barrett said.

4) Not all lies are created equal

Facebook, Twitter and YouTube are largely united in trying to take down certain kinds of false information, such as targeted attempts to drive down voter turnout. But their enforcement has been more varied when it comes to material that is arguably misleading.

In some cases, the companies label the material factually dubious or use their algorithms to limit its spread. But in the lead-up to 2020, the companies’ rules are being tested by political candidates and government leaders who sometimes play fast and loose with the facts.

“A lot of the mainstream campaigns and politicians themselves tend to rely on a mix of fact and fiction,” Marchal said. “It’s often a lot of … things that contain a kernel of truth but have been distorted.”

One example is the flap over a Trump campaign ad — which appeared on Facebook, YouTube and some television networks — suggesting that former Vice President Joe Biden had pressured Ukraine into firing a prosecutor to squelch an investigation into an energy company whose board included Biden’s son Hunter. In fact, the Obama administration and multiple U.S. allies had pushed for removing the prosecutor for slow-walking corruption investigations. The ad “relies on speculation and unsupported accusations to mislead viewers,” the nonpartisan site FactCheck.org concluded.

The issue has put tech companies at the center of a tug of war in Washington. Republicans have argued for more permissive rules to safeguard constitutionally protected political speech, while Democrats have called for greater limits on politicians’ lies.

Democrats have especially lambasted Facebook for refusing to fact-check political ads, and have criticized Twitter for letting politicians lie in their tweets and Google for limiting candidates’ ability to finely tune the reach of their advertising — all examples, the Democrats say, of Silicon Valley ducking the fight against deception.

Jesse Blumenthal, who leads the tech arm of the Koch-backed Stand Together coalition, said expecting Silicon Valley to play truth cop places an undue burden on tech companies to litigate messy disputes over what’s factual.

“Most of the time the calls are going to be subjective, so what they end up doing is putting the platforms at the center of this rather than politicians being at the center of this,” he said.

Further complicating matters, the companies have generally granted politicians considerably more leeway to spread lies and half-truths through their individual accounts and in certain instances through political ads. “We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying,” Facebook CEO Mark Zuckerberg said in an October speech at Georgetown University in which he defended his company’s policy.

But Democrats say tech companies shouldn’t profit off false political messaging.

“I’m supportive of these social media companies taking a much harder line on what content they allow in terms of political ads and calling out lies that are in political ads, recognizing that that’s not always the easiest thing to draw those distinctions,” Democratic Rep. Pramila Jayapal of Washington state told POLITICO.

This article was originally published on POLITICO Magazine.
