The funny thing about fake news is how mind-numbingly boring it can be. Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Brexit voters. And so on.

The really tedious stuff is all the also-incomplete, equally self-serving pronouncements that surround ‘fake news’. Some very visibly, a lot less so.

Such as Russia dismissing the election interference narrative as a “fantasy” or a “fairytale” — even now, when presented with a 37-page indictment detailing what Kremlin agents got up to (including on US soil). Or Trump continuing to bluster that Russian-generated fake news is itself “fake news”.

And, indeed, the social media firms themselves, whose platforms have been the unwitting conduits for lots of this stuff, shaping the information they release about it — in what can look suspiciously like an attempt to downplay the significance and impact of malicious disinformation, because, well, that spin serves their interests.

The claim and counter claim that spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing but a lot of it more subtle in its attempts to mislead (e.g., the publicly unseen ‘on background’ information routinely sent to reporters to try to invisibly shape coverage in a firm’s favor) — are applied in equal and opposite directions in the interests of obfuscation; using speech and/or misinformation as a form of censorship to fog the lens of public opinion.

This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course. But it’s also clearly deliberate.

As Zeynep Tufekci has eloquently argued: “The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.”

So we also get treated to all this intentional padding, applied selectively, to defuse and derail clear lines of argument; to encourage confusion and apathy; to shift blame and buy time. Bored people are less likely to call their political representatives to complain.

Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate — and seeking to shift what are by nature shifty sands (after all, truth and trust can be relative concepts, depending on your personal perspective/prejudices) — makes it hard for any outsider to nail this gelatinous fakery to the wall.

Why would tech giants want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy damaging disinformation.

And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the equivalents to traditional media outlets.

But the stakes are high indeed when democracy and the rule of law are on the line. And by failing to be proactive about the existential threat posed by digitally accelerated disinformation, tech giants have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

*

Every gun massacre in America is now routinely followed by a flood of Russian-linked Twitter bot activity. Exacerbating social division is the name of this game. And it’s playing out all over social media continually, not just around elections.

In the case of Russian meddling connected to the UK’s 2016 Brexit referendum, which we now know for sure existed — still without having all of the data we need to quantify the actual impact, the head of a parliamentary committee that’s running an enquiry into fake news has accused both Twitter and Facebook of essentially ignoring requests for data and help, and doing none of the work the committee asked of them.

Twitter has since said it will take a more thorough look through its archives. And Facebook has drip-fed some tidbits of additional information. But more than a year and a half after the vote itself, many, many questions remain.

And just this week another third party study suggested that the impact of Russian Brexit trolling was far larger than has so far been conceded by the two social media firms.

The PR company that carried out this research included in its report a list of outstanding questions for Twitter and Facebook.

Here they are:

We put these questions to Twitter and Facebook.

In response, a Twitter spokeswoman pointed us to some “key points” from a previous letter it sent to the DCMS committee (emphasis hers):

In response to the Electoral Commission’s request for information concerning Russian-funded campaign activity conducted during the regulated period for the June 2016 EU Referendum (15 April to 23 June 2016), Twitter reviewed referendum-related advertising on our platform during the relevant period.

Among the accounts that we have previously identified as likely funded from Russian sources, we have thus far identified one account — @RT_com — which promoted referendum-related content during the regulated period. $1,031.99 was spent on six referendum-related ads during the regulated period.

With regard to future activity by Russian-funded accounts, on 26 October 2017, Twitter announced that it would no longer accept advertisements from RT and Sputnik and will donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civic engagement. That decision was based on a retrospective review that we initiated in the aftermath of the 2016 U.S. Presidential Elections and following the U.S. intelligence community’s conclusion that both RT and Sputnik have attempted to interfere with the election on behalf of the Russian government. Accordingly, @RT_com will not be eligible to use Twitter’s promoted products in the future.

The spokeswoman declined to provide any on-the-record comment in response to the specific questions.

A Facebook representative first asked to see the full study, which we sent, then failed to provide a response to the questions at all.

The PR firm behind the research, 89up, makes this particular study fairly easy for them to ignore. It’s a pro-Remain organization. The research was not undertaken by a group of impartial university academics. The study isn’t peer reviewed, and so on.

But, in an illustrative twist, if you Google “89up Brexit”, Google injects fresh Kremlin-backed opinions into the search results it delivers — see the top and third results…


Clearly, there’s no such thing as ‘bad publicity’ if you’re a Kremlin propaganda node.

Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD — in this instance by calling 89up biased because it supported the UK staying in the EU. Making it easy for Russian state organs to slur the research as worthless.

The social media firms aren’t making that point in public. They don’t have to. That argument is being made for them by an entity whose former brand name was literally ‘Russia Today’. Fake news thrives on shamelessness, clearly.

It also very clearly thrives in the limbo of fuzzy accountability where researchers and journalists essentially have to scream at social media firms until they’re blue in the face to get even partial answers to perfectly reasonable questions.

Frankly, this situation is looking increasingly unsustainable.

Not least because governments are cottoning on — some are setting up departments to monitor malicious disinformation and even drafting anti-fake news election laws.

And while the social media firms have been a bit more alacritous to respond to domestic lawmakers’ requests for action and investigation into political disinformation, that just makes their wider inaction, when viable and reasonable concerns are brought to them by non-US politicians and concerned individuals, all the more inexcusable.

The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

But if tech giants have treated requests for data and help about political disinformation from the UK — a close US ally — so poorly, you can imagine how unresponsive and/or unreachable these companies are to further flung nations, with fewer or zero ties to the land of the free.

Earlier this month, in what looked very much like an act of exasperation, the chair of the UK’s fake news enquiry, Damian Collins, flew his committee over the Atlantic to question Facebook, Twitter and Google staffers in an evidence session in Washington.

None of the companies sent their CEOs to face the committee’s questions. None provided a substantial amount of new information. The full impact of Russia’s meddling in the Brexit vote remains unquantified.

One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

*

The partial data about Russia’s Brexit dis-ops, which Twitter and Facebook have trickled out so far, like blood from the proverbial stone, is unhelpful precisely because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news to churn out more maliciously minded content, as RT and Sputnik demonstrably have.

In all probability, it also pours more fuel on Brexit-based societal division. The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU. What role did social media and Kremlin agents play in exacerbating those divisions? Without hard data it’s very difficult to say.

But, at the end of the day, it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is that no one except the Kremlin and the social media firms themselves is in a position to judge.

And no one in their right mind would now suggest we swallow Russia’s line that so-called fake news is a fiction sicked up by over-imaginative Russophobes.

But social media firms also cannot be trusted to truth tell on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

Self interest also compellingly explains how poorly they have handled this problem to date; and why they continue — even now — to impede investigations by not disclosing enough data and/or failing to interrogate deeply enough their own systems when asked to respond to reasonable requests.

A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out to try to land a knock-out blow in the game of ‘fake news and/or total fiction’, serves no useful purpose in a civilized society. It’s just more FUD for the fake news mill.

Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side. (As they frequently are by and on social media.) So researchers could do robust math on fake news — if only they had access to the underlying data.
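
If researchers did have that data, the core measurement would be almost trivial. The short Python sketch below shows the idea; the record format and every number in it are invented for illustration, not real platform data.

```python
# Hypothetical sketch: with per-item share counts plus independent
# fact-checker verdicts, measuring the relative reach of false vs.
# truthful content is simple arithmetic. All figures here are invented.
from statistics import mean

sample_items = [
    ("false", 12000), ("true", 1500),
    ("false", 8600),  ("true", 2300),
    ("false", 15400), ("true", 900),
]

def amplification_ratio(items):
    """Mean shares of false items divided by mean shares of true items."""
    false_shares = [shares for verdict, shares in items if verdict == "false"]
    true_shares = [shares for verdict, shares in items if verdict == "true"]
    return mean(false_shares) / mean(true_shares)

# On this toy sample, false items travel ~7.7x further than true ones.
print(f"amplification ratio: {amplification_ratio(sample_items):.1f}x")
```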

But only the tech giants have that. And they’re not falling over themselves to share it. Instead, Twitter routinely rubbishes third party studies precisely because external researchers don’t have full visibility into how its systems shape and distribute content.

Yet external researchers don’t have that visibility because Twitter prevents them from seeing how it shapes tweet flow. Therein lies the rub.

Yes, some of the platforms in the firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

Facebook has also built some anti-fake news tools to try to tweak what its algorithms favor, though nothing it’s done on that front to date works very successfully (even as a more recent change to its News Feed, to make it less of a news feed, has had a unilateral and damaging impact on the visibility of genuine news organizations’ content — so Facebook is arguably going to be unhelpful in reducing disinformation-fueled division).

In another instance, Facebook’s mass closing of what it described as “fake accounts” ahead of, for example, the UK and French elections can also look problematic, in democratic terms, because we don’t fully know how it identified the particular “tens of thousands” of accounts to close. Nor what content they had been sharing prior to this. Nor why it hadn’t closed them before if they were indeed Kremlin disinformation-spreading agents.

More recently, Facebook has said it will implement a disclosure system for political ads, including posting a snail mail postcard to entities wishing to pay for political advertising on its platform — to try to verify they are indeed located in the territory they say they are.
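
As an aside, the postcard mechanic Facebook describes amounts to a one-time-code proof of physical address. Here is a minimal sketch of that flow, with hypothetical function names and storage; it is an illustration of the idea, not Facebook’s actual system.

```python
# Hypothetical sketch of postal verification for political advertisers:
# mail a one-time code to the declared address, and only treat the
# advertiser as verified if they can enter that code back.
import secrets

pending_codes = {}  # advertiser_id -> one-time code awaiting confirmation

def start_verification(advertiser_id, declared_address):
    """Generate a code that would be printed on a postcard and mailed
    to the address the advertiser claims to operate from."""
    code = secrets.token_hex(4)
    pending_codes[advertiser_id] = code
    print(f"Mail postcard with code {code} to: {declared_address}")

def confirm_code(advertiser_id, code_from_postcard):
    """True only if the advertiser received mail at the declared address."""
    return pending_codes.get(advertiser_id) == code_from_postcard

start_verification("example-pac", "1 Example Street, Springfield")
# Later, the advertiser types in the code from the card they received:
# confirm_code("example-pac", "<code printed on the card>")
```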

Yet its own VP of ads has admitted that Russian efforts to spread disinformation are ongoing and persistent, and do not solely target elections or politics.

The wider point is that social division is itself a tool for impacting democracy and elections — so if you want to achieve ongoing political meddling that’s the game you play.

You don’t just tool up your guns ahead of a particular election. You need to worry away at society’s weak points continuously to fray tempers and raise tensions.

Elections don’t take place in a vacuum. And if people are angry and divided in their daily lives then that will naturally be reflected in the choices made at the ballot box, whenever there’s an election.

Russia knows this. And that’s why the Kremlin has been playing such a long game. Why it’s not just targeting elections. Its targets are fault lines in the fabric of society — be it gun control vs gun owners or conservatives vs liberals or people of color vs white supremacists — whatever issues it can seize on to stir up trouble and rip away at the social fabric.

That’s what makes digitally amplified disinformation an existential threat to democracy and to civilized societies. Nothing on this scale has been possible before.

And it’s thanks, in great part, to the reach and power of social media that this game is being played so effectively — because these platforms have historically preferred to champion free speech rather than root out and eradicate hate speech and disinformation; inviting trolls and malicious agents to exploit the freedom afforded by their free speech ideology and to turn powerful broadcast and information-targeting platforms into cyberweapons that blast the free societies that created them.

Social media’s filtering and sorting algorithms also crucially failed to make any distinction between information and disinformation. Which was their great existential error of judgement, as they sought to eschew editorial responsibility while simultaneously working to dominate and crush traditional media outlets which do operate within a more tightly regulated environment (and, at least in some instances, have a civic mission to truthfully inform).

Publishers have their own biases too, of course, but those biases tend to be writ large — vs platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

But if your platform treats everything and almost anything indiscriminately as ‘content’, then don’t be surprised if fake news becomes indistinguishable from the genuine article because you’ve built a system that allows sewage and potable water to flow through the same distribution pipe.

So it’s interesting to see Goldman’s suggested answer to social media’s existential fake news problem attempting, even now, to deflect blame — by arguing that the US education system should take on the burden of arming citizens to deconstruct all the dubious nonsense that tech platforms are piping into people’s eyeballs.

Lessons in critical thinking are certainly a good idea. But fakes are compelling for a reason. Look at the tenacity with which conspiracy theories take hold in the US. In short, it would take a very long time and a very large investment in critical thinking education programs to create any kind of shielding intellectual capacity able to protect the population at large from being fooled by maliciously crafted fakes.

Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth. Left unchecked, this problem is going to get existentially worse too.

So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

*

If you’re the target of malicious propaganda you’ll very likely find the content compelling because it’s been crafted with your specific likes and dislikes in mind. Imagine, for example, your trigger reaction to being sent a deepfake of your wife in bed with your best friend.

That’s what makes this incarnation of disinformation so potent and insidious vs earlier forms of malicious propaganda (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms that are simultaneously global in reach and capable of delivering individually targeted propaganda campaigns. That’s the crux of the shift here).

Fake news is also insidious because of the lack of civic restraints on disinformation agents, which makes maliciously minded fake news so much more potent and problematic than plain advertising.

I mean, even people who’ve searched for ‘slippers’ an awful lot of times, because they really love buying slippers, are probably only in the market for buying one or two pairs a year — no matter how many adverts for slippers Facebook serves them. They’re also probably unlikely to actively evangelize their slipper preferences to their friends, family and wider society — by, for example, posting about their slipper-based views on their social media feeds and/or engaging in slipper-based discussions around the dinner table or even attending pro-slipper rallies.

And even if they did, they’d have to be a very charismatic individual indeed to generate much interest and influence. Because, well, slippers are boring. They’re not a polarizing product. There aren’t tribes of slipper owners as there are car buyers. Because slippers are a non-complex, functional comfort item with minimal impact. So an individual’s slipper preferences, even if very liberally put about on social media, are unlikely to generate strong opinions or reactions either way.

Political opinions and political positions are another matter. They are frequently what define us as individuals. They are also what can divide us as a society, sadly.

To put it another way, political opinions are not slippers. People rarely try a new one on for size. Yet social media firms spent a very long time indeed trying to sell the ludicrous fallacy that content about slippers and maliciously crafted political propaganda, mass-targeted effortlessly and inexpensively via their ad platforms, was essentially the same stuff. See: Zuckerberg’s infamous “pretty crazy idea” comment, for example.

Indeed, looking back over the last few years’ news about fake news, Facebook and Twitter have demonstrably sought to play down the idea that the disinformation distributed via their platforms might have had any sort of quantifiable impact on the democratic process at all.

Yet these are the same firms that make money — very large amounts of money, in some cases — by selling their capability to influentially target ads.

So they have essentially tried to claim that it’s only when foreign entities engage with their platforms, and used their tools — not to sell slippers or a Netflix subscription but to press people’s biases and prejudices in order to sow social division and impact democratic outcomes — that, all of a sudden, these powerful tools cease to function.

And we’re supposed to take it on trust from the same self-interested companies that the unknown quantity of malicious ads being fenced on their platforms is but a teeny tiny drop in the overall content ocean they’re serving up so hey why can’t you just stop overreacting?

That’s also pure misdirection of course. The wider problem with malicious disinformation is it pervades all content on these platforms. Malicious paid-for ads are just the tip of the iceberg.

So sure, the Kremlin didn’t spend very much money paying Twitter and Facebook for Brexit ads — because it didn’t need to. It could (and did) freely set up ranks of bot accounts on their platforms to tweet and share content created by RT, for example — frequently skewed towards promoting the Leave campaign, according to multiple third party studies — amplifying the reach and impact of its propaganda without having to send the tech firms any more checks.

And indeed, Russia is still operating ranks of bots on social media which are actively working to divide public opinion, as Facebook freely admits.

Maliciously minded content has also been shown to be preferred by (for example) Facebook’s or Google’s algorithms vs truthful content, because their systems have been tuned to favor what’s most clickable and shareable and can also be all too easily gamed.
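
To see why, consider a toy ranker: if predicted engagement is the only input to the score, truthfulness cannot affect the ordering, and clickable fakes float to the top. The sketch below is exactly that assumption, not any platform’s real algorithm.

```python
# Toy feed ranker tuned purely for clicks and shares. Truthfulness is
# known to fact checkers but is never an input to the score, which is
# the design flaw the surrounding text describes. Data is invented.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    click_rate: float   # predicted clicks per impression
    share_rate: float   # predicted shares per impression
    truthful: bool      # visible to fact checkers, ignored by the ranker

def engagement_score(item):
    # Weights are arbitrary; note the absence of item.truthful.
    return 0.6 * item.click_rate + 0.4 * item.share_rate

feed = [
    Item("SHOCKING: what they don't want you to know", 0.21, 0.12, False),
    Item("Select committee publishes interim report", 0.04, 0.01, True),
]

# The false item outranks the true one on engagement alone.
for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):.3f}  {item.headline}")
```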

And, despite their ongoing techie efforts to fix what they view as some kind of content-sorting problem, their algorithms continue to get caught and called out for promoting dubious stuff.

Thing is, this kind of dynamic, contextual judgement is very hard for AI — as Zuckerberg himself has conceded. But human review is unthinkable. Tech giants simply do not want to employ the numbers of humans that would be necessary to always be making the right editorial call on each and every piece of content.

If they did, they’d instantly become the biggest media organizations in the world — needing at least hundreds of thousands (if not millions) of vetted journalists to serve every market and local region they cover.

They would also instantly invite regulation as publishers — ergo, back to the regulatory nightmare they’re so desperate to avoid.

All of this is why fake news is an existential problem for social media.

And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising. Rather than malicious content in general.

Because if you sit and think about the full scope of malicious disinformation, coupled with the automated global distribution platforms that social media has become, it soon becomes clear this problem scales as big and wide as the platforms themselves.

And at that point only two solutions look viable:

A) bespoke regulation, including regulatory access to proprietary algorithmic content-sorting engines.

B) breaking up big tech so none of these platforms have the reach and power to enable mass-manipulation.

The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powers — swapping out editorially structured news distribution for machine-powered content hierarchies that lack any kind of civic mission — is really only just beginning to become clear, as the detail of abuses and misuses slowly emerges. And as certain damages are felt.

Facebook’s user base is a staggering two billion+ at this point — way bigger than the population of the world’s most populous country, China. Google’s YouTube has over a billion users. Which the company points out amounts to more than a third of the entire user-base of the Internet.

What does this seismic shift in media distribution and consumption mean for societies and democracies? We can hazard guesses but we’re not in a position to know without much better access to tightly guarded, commercially controlled information streams.

Really, the case for regulating social media is starting to look unstoppable.

But even with unfettered access to internal data and the potential to control content-sifting engines, how do you fix a problem that scales so very big and broad?

Regulating such massive, global platforms would clearly not be easy. In some markets Facebook is so dominant it essentially is the Internet.

So, again, this problem looks existential. And Zuck’s 2018 challenge is more Sisyphean than Herculean.

And it might well be that competition concerns are not the only trigger of calls for big tech to get broken up this year.
