Why the fight against disinformation, sham accounts and trolls won’t be any easier in 2020

The big tech companies have announced aggressive steps to keep trolls, bots and online fakery from marring another presidential election — from Facebook’s removal of billions of fake accounts to Twitter’s spurning of all political ads.

But it’s a never-ending game of whack-a-mole that’s only getting harder as we barrel toward the 2020 election. Disinformation peddlers are deploying new, more subversive techniques, and American operatives have adopted some of the deceptive tactics Russians tapped in 2016. Now, tech companies face thorny and sometimes subjective choices about how to combat them — at times drawing flak from both Democrats and Republicans as a result.

This is our roundup of some of the evolving challenges Silicon Valley faces as it tries to counter online lies and bad actors heading into the 2020 election cycle:

1) American trolls may be a greater threat than Russians

Russia-backed trolls notoriously flooded social media with disinformation around the presidential election in 2016, in what Robert Mueller’s investigators described as a multimillion-dollar plot involving years of planning, hundreds of people and a wave of fake accounts posting news and ads on platforms like Facebook, Twitter and Google-owned YouTube.

This time around — as experts have warned — a growing share of the threat is likely to originate in America.

“It’s likely that there will be a high volume of misinformation and disinformation pegged to the 2020 election, with the majority of it being generated right here in the United States, as opposed to coming from overseas,” said Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights.

Barrett, the author of a recent report on 2020 disinformation, noted that lies and misleading claims about 2020 candidates originating in the U.S. have already spread across social media. Those include manufactured sex scandals involving South Bend, Ind., Mayor Pete Buttigieg and Sen. Elizabeth Warren (D-Mass.) and a smear campaign calling Sen. Kamala Harris (D-Calif.) “not an American black” because of her multiracial heritage. (The latter claim got a boost on Twitter from Donald Trump Jr.)

Before last year’s midterm elections, Americans similarly amplified fake messages such as a “#nomenmidterms” hashtag that urged liberal men to stay home from the polls to make “a Woman’s Vote Worth more.” Twitter suspended at least one person — actor James Woods — for retweeting that message.

“A lot of the disinformation that we can identify tends to be domestic,” said Nahema Marchal, a researcher at the Oxford Internet Institute’s Computational Propaganda Project. “Just regular private citizens leveraging the Russian playbook, if you will, to create … a divisive narrative, or just mixing factual reality with made-up facts.”

Tech companies say they’ve broadened their fight against disinformation as a result. Facebook, for instance, announced in October that it had expanded its policies against “coordinated inauthentic behavior” to reflect a rise in disinformation campaigns run by non-state actors, domestic groups and companies. But people tracking the spread of fakery say it remains a problem, especially inside closed groups like those popular on Facebook.

2) And policing domestic content is tricky

U.S. law forbids foreigners from taking part in American political campaigns — a fact that made it easy for members of Congress to criticize Facebook for accepting rubles as payment for political ads in 2016.

But Americans are allowed, even encouraged, to partake in their own democracy — which makes things a lot more complicated when they use social media tools to try to skew the electoral process. For one thing, the companies face a technical challenge: Domestic meddling doesn’t leave obvious markers such as ads written in broken English and traced back to Russian internet addresses.

More fundamentally, there’s often no clear line between bad-faith meddling and dirty politics. It’s not illegal to run a mud-slinging campaign or engage in unscrupulous electioneering. And the tech companies are wary of being seen as infringing on Americans’ right to engage in political speech — all the more so as conservatives such as President Donald Trump accuse them of silencing their voices.

Plus, the line between foreign and domestic can be blurry. Even in 2016, the Kremlin-backed troll farm known as the Internet Research Agency relied on Americans to boost its disinformation. Now, claims with hazy origins are being picked up without the need for a coordinated 2016-style foreign campaign. Simon Rosenberg, a longtime Democratic strategist who has spent recent years focused on online disinformation, points to Trump’s promotion of the theory that Ukraine significantly meddled in the 2016 U.S. election, a charge that some experts trace back to Russian security forces.

“It’s hard to know if something is foreign or domestic,” said Rosenberg, once it “gets swept up in this vast ‘Wizard of Oz’-like noise machine.”

3) Bad actors are learning

Experts agree on one thing: The election interference tactics that social media platforms encounter in 2020 will look different from those they’ve been trying to fend off since 2016.

“What we’re going to see is the continued evolution and development of new approaches, new experimentation trying to see what will work and what won’t,” said Lee Foster, who leads the information operations intelligence analysis team at the cybersecurity firm FireEye.

Foster said the “underlying motivations” of undermining democratic institutions and casting doubt on election results will remain constant, but the trolls have already evolved their tactics.

For instance, they’ve gotten better at obscuring their online activity to avoid automatic detection, even as social media platforms ramp up their use of artificial intelligence software to dismantle bot networks and eradicate inauthentic accounts.
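To make the cat-and-mouse dynamic concrete, here is a minimal sketch of the kind of signal-based scoring an automated detection system might apply. The features, weights and thresholds are illustrative assumptions, not any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # how long the account has existed
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # share of posts copied verbatim from other accounts (0 to 1)

def bot_score(a: Account) -> float:
    """Toy inauthenticity score in [0, 1]; higher means more bot-like.
    Every feature and threshold here is invented for illustration."""
    score = 0.0
    if a.age_days < 30:       # brand-new accounts are weakly suspicious
        score += 0.3
    if a.posts_per_day > 50:  # superhuman posting volume
        score += 0.4
    score += 0.3 * a.duplicate_ratio  # coordinated copy-paste behavior
    return min(score, 1.0)

# A new, high-volume, copy-pasting account is flagged easily...
print(bot_score(Account(age_days=5, posts_per_day=120, duplicate_ratio=0.9)))   # 0.97
# ...while an aged account that posts modestly and varies its wording slips through.
print(bot_score(Account(age_days=400, posts_per_day=8, duplicate_ratio=0.1)))   # 0.03
```

Obscuring activity means engineering around exactly these kinds of signals, which is why each round of detection prompts a new round of evasion.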

“One of the challenges for the platforms is that, on the one hand, the public understandably demands more transparency from them about how they take down or identify state-sponsored attacks or how they take down these big networks of inauthentic accounts, but at the same time they can’t reveal too much at the risk of playing into bad actors’ hands,” said Oxford’s Marchal.

Researchers have already observed extensive efforts to distribute disinformation through user-generated posts — known as “organic” content — rather than the ads or paid messages that were prominent in the 2016 disinformation campaigns.

Foster, for example, cited trolls impersonating journalists or other more reliable figures to give disinformation greater legitimacy. And Marchal noted a rise in the use of memes and doctored videos, whose origins can be difficult to track down. Jesse Littlewood, vice president at advocacy group Common Cause, said social media posts aimed at voter suppression frequently appear no different from ordinary people sharing election updates in good faith — messages such as “you can text your vote” or “the election’s a different day” that can be “quite harmful.”

Tech companies insist they are learning, too. Since the 2016 election, Google, Facebook and Twitter have devoted security experts and engineers to tackling disinformation in national elections across the globe, including the 2018 midterms in the United States. The companies say they have gotten better at detecting and removing fake accounts, particularly those engaged in coordinated campaigns.

But other tactics may have escaped detection so far. NYU’s Barrett noted that disinformation-for-hire operations sometimes employed by corporations may be ripe for use in U.S. politics, if they’re not already.

He pointed to a recent experiment conducted by the cyber threat intelligence firm Recorded Future, which said it paid two shadowy Russian “threat actors” a total of just $6,050 to generate media campaigns promoting and trashing a fictitious company. Barrett said the project was intended “to lure out of the shadows firms that are willing to do this kind of work,” and demonstrated how easy it is to generate and sow disinformation.

Real-life examples include a hyper-partisan, skewed news operation started by a former Fox News executive and Facebook’s accusations that an Israeli social media company profited from creating hundreds of fake accounts. That “shows that there are firms out there that are willing and eager to engage in this kind of underhanded activity,” Barrett said.

4) Not all lies are created equal

Facebook, Twitter and YouTube are largely united in trying to take down certain kinds of false information, such as targeted attempts to drive down voter turnout. But their enforcement has been more varied when it comes to material that is arguably misleading.

In some cases, the companies label the material factually dubious or use their algorithms to limit its spread. But in the lead-up to 2020, the companies’ rules are being tested by political candidates and government leaders who sometimes play fast and loose with the truth.

“A lot of the mainstream campaigns and politicians themselves tend to rely on a mix of fact and fiction,” Marchal said. “It’s often a lot of … things that contain a kernel of truth but have been distorted.”

One example is the flap over a Trump campaign ad — which appeared on Facebook, YouTube and some television networks — suggesting that former Vice President Joe Biden had pressured Ukraine into firing a prosecutor to squelch an investigation into an energy company whose board included Biden’s son Hunter. In fact, the Obama administration and multiple U.S. allies had pushed for removing the prosecutor for slow-walking corruption investigations. The ad “relies on speculation and unsupported accusations to mislead viewers,” the nonpartisan site FactCheck.org concluded.

The debate has put tech companies at the center of a tug of war in Washington. Republicans have argued for more permissive rules to safeguard constitutionally protected political speech, while Democrats have called for greater limits on politicians’ lies.

Democrats have especially lambasted Facebook for refusing to fact-check political ads, and have criticized Twitter for letting politicians lie in their tweets and Google for limiting candidates’ ability to finely tune the reach of their advertising — all examples, the Democrats say, of Silicon Valley ducking the fight against deception.

Jesse Blumenthal, who leads the tech policy arm of the Koch-backed Stand Together coalition, said expecting Silicon Valley to play truth cop places an undue burden on tech companies to litigate messy disputes over what’s factual.

“Most of the time the calls are going to be subjective, so what they end up doing is putting the platforms at the center of this rather than politicians being at the center of this,” he said.

Further complicating matters, social media sites have generally granted politicians considerably more leeway to spread lies and half-truths through their individual accounts and in certain instances through political ads. “We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying,” Facebook CEO Mark Zuckerberg said in an October speech at Georgetown University in which he defended his company’s policy.

But Democrats say tech companies shouldn’t profit off false political messaging.

“I am supportive of these social media companies taking a much harder line on what content they allow in terms of political ads and calling out lies that are in political ads, recognizing that that’s not always the easiest thing to draw those distinctions,” Democratic Rep. Pramila Jayapal of Washington state told POLITICO.

Article originally published on POLITICO Magazine

Apple Card faces probe over discrimination complaint | ABS-CBN News

Something curious happened when a husband and wife recently compared their Apple Card spending limits.

David Heinemeier Hansson vented on Twitter that even though his spouse, Jamie Hansson, had a better credit score and other factors in her favor, her application for a credit line increase had been denied.

The prominent software developer wondered how his credit line could be 20 times higher, referring to Apple Card as a “sexist program” (with an expletive added for emphasis).

The card, a partnership between Apple and Goldman Sachs, made its debut in the United States in August.

“My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time,” he wrote Thursday on Twitter. “Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does.”

Hansson’s tweets caught the attention of more than just his 350,000 followers.

They struck a nerve with New York state regulators, who announced Saturday that they would investigate the algorithm used by Apple Card to determine the creditworthiness of applicants.

Algorithms are sets of instructions used by computers, search engines and smartphone applications to perform tasks, from ordering food delivery to hailing a ride — and yes, deciding credit applications.

The criteria used by the Apple Card are now being scrutinized by the New York State Department of Financial Services.

“Any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class violates New York law,” an agency spokeswoman said in a statement Saturday night.

“DFS is troubled to learn of potential discriminatory treatment in regards to credit limit decisions reportedly made by an algorithm of Apple Card, issued by Goldman Sachs, and the Department will be conducting an investigation to determine whether New York law was violated and ensure all consumers are treated equally regardless of sex,” the statement said.

An Apple spokeswoman directed questions to a Goldman Sachs spokesman, Andrew Williams, who said that the company could not comment publicly on individual customers.

“Our credit decisions are based on a customer’s creditworthiness and not on factors like gender, race, age, sexual orientation or any other basis prohibited by law,” Williams said.

David Hansson did not respond to an interview request Saturday night.

His wife’s experience with the Apple Card, the first credit card offering by Goldman Sachs, does not appear to be an isolated case, however.

Steve Wozniak, who designed the Apple-1 computer and founded the tech giant with Steve Jobs, responded to Hansson’s tweet with a similar account.

“The same thing happened to us,” Wozniak wrote. “I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for a correction though. It’s big tech in 2019.”

In addition to Goldman Sachs, Apple partnered with Mastercard on the Apple Card, which the companies hailed as a revolutionary “digital first” credit card that had no numbers and could be added to the Wallet app on the iPhone and used with Apple Pay.

A spokesman for Mastercard, which provides support for Apple Card’s global payments network, did not respond to a request for comment Saturday.

David Hansson, a Danish entrepreneur and California resident, is known for creating Ruby on Rails, a popular web application framework written in the Ruby programming language and used to build database-backed web applications. He is an author and decorated race car driver on the Le Mans circuit, according to a biography on his website.

In a subsequent tweet, he said that the Apple Card’s customer service representatives told his wife that they were not authorized to discuss the credit assessment process.

He said that customer service employees were unable to explain why the algorithm had designated her to be less creditworthy but had assured his wife that the bank was not discriminating against women.

An applicant’s credit score and income level are used by Goldman Sachs to determine creditworthiness, according to a support page for the Apple Card. Past due accounts, a checking account closed by a bank for overdrafts, liens and medical debts can negatively affect applications, the page stated.
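Goldman Sachs has not published its model, but the support page’s description suggests a rules-based scoring of those inputs. Here is a purely hypothetical sketch of such a decision, with invented weights and cutoffs, to show why an opaque pipeline is hard to audit from the outside:

```python
def credit_limit(credit_score: int, annual_income: float,
                 past_due_accounts: int, has_liens: bool) -> float:
    """Hypothetical credit-limit rule. Every weight and cutoff here is
    invented for illustration; this is not Goldman Sachs' actual model."""
    if credit_score < 600 or past_due_accounts > 2:
        return 0.0                    # application declined
    limit = annual_income * 0.15      # base limit as a slice of reported income
    limit *= credit_score / 700       # scale by credit score
    if has_liens:
        limit *= 0.5                  # liens halve the offer
    return round(limit, -2)           # round to the nearest $100

# Two applicants with similar scores but different reported individual incomes
# get very different limits, with no "gender" input anywhere in the rule.
print(credit_limit(780, 150_000, 0, False))  # 25100.0
print(credit_limit(800, 40_000, 0, False))   # 6900.0
```

The dispute is precisely that a model with no gender input can still produce gendered outcomes if inputs such as individually reported income correlate with gender, and a black box makes that difficult to check.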

On Friday, a day after David Hansson started railing on the Apple Card’s treatment of female credit applicants, he said his wife got a “VIP bump” to match his credit limit. He said that didn’t make up for the flawed algorithm used by Apple Card.

He said many women had shared similar experiences with him on Twitter and urged regulators to contact them.

“My thread is full of accounts from women who’ve been declared to be worse credit risks than their husbands, despite higher credit scores or incomes,” he said.

© 2019 The New York Times Company

Facebook, free speech, and political ads – Columbia Journalism Review

A number of Facebook’s recent decisions have fueled a criticism that continues to follow the company, including the decision not to fact-check political advertising and the inclusion of Breitbart News in the company’s new “trusted sources” News tab. These controversies were stoked even further by Mark Zuckerberg’s speech at Georgetown University last week, where he tried—mostly unsuccessfully—to portray Facebook as a defender of free speech. CJR thought all of these topics were worth discussing with free-speech experts and researchers who focus on the power of platforms like Facebook, so we convened an interview series this week on our Galley discussion platform, featuring guests like Alex Stamos, former chief security officer of Facebook, veteran tech journalist Kara Swisher, Jillian York of the Electronic Frontier Foundation, Harvard Law professor Jonathan Zittrain, and Stanford researcher Kate Klonick.

Stamos, one of the first to raise the issue of potential Russian government involvement on Facebook’s platform while he was the head of security there, said he had a number of issues with Zuckerberg’s speech, including the fact that he “compressed all of the different products into this one blob he called Facebook. That’s not a useful frame for pretty much any discussion of how to handle speech issues.” Stamos said the News tab is arguably a completely new category of product, a curated and in some cases paid-for selection of media, and that this means the company has much more responsibility for what appears there. Stamos also said that there are “dozens of Cambridge Analyticas operating today collecting sensitive data on individuals and using it to target ads for political campaigns. They just aren’t dumb enough to get their data through breaking an API agreement with Facebook.”

Ellen Goodman, co-founder of the Rutgers Institute for Information Policy & Law, said that Mark Zuckerberg isn’t the first to have to struggle with tensions between free speech and democratic discourse, “it’s just that he’s confronting these questions without any connection to press traditions, with only recent acknowledgment that he runs a media company, in the absence of any regulation, and with his hands on personal data and technical affordances that enable microtargeting.” Kate Klonick of Stanford said Zuckerberg spoke glowingly about early First Amendment cases, but got one of the most famous—NYT v Sullivan—wrong. “The case really stands for the idea of tolerating even untrue speech in order to empower citizens to criticize political figures,” Klonick said. “It is not about privileging political figures’ speech, which of course is exactly what the new Facebook policies do.”

Evelyn Douek, a doctoral student at Harvard Law and an affiliate at the Berkman Klein Center For Internet & Society, said most of Zuckerberg’s statements about his commitment to free speech were based on the old idea of a marketplace of ideas being the best path to truth. This metaphor has always been questionable, Douek says, “but it makes no sense at all in a world where Facebook constructs, tilts, distorts the marketplace with its algorithms that favor a certain kind of content.” She said Facebook’s amplification of certain kinds of information via the News Feed algorithm “is a cause of a lot of the unease with our current situation, especially because of the lack of transparency.” EFF director Jillian York said the political ad issue is a tricky one. “I do think that fact-checking political ads is important, but is this company capable of that? These days, I lean toward thinking that maybe Facebook just isn’t the right place for political advertising at all.”
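Douek’s point about a tilted marketplace has a simple mechanical core: a feed ranked by predicted engagement surfaces whatever maximizes reactions, and truth appears nowhere in the objective. The following toy illustration uses invented weights and is not Facebook’s actual News Feed code:

```python
def rank_score(p_click: float, p_share: float, p_comment: float) -> float:
    """Toy engagement-weighted ranking score; the weights are invented.
    Nothing in the objective rewards accuracy."""
    return 1.0 * p_click + 3.0 * p_share + 2.0 * p_comment

posts = {
    "measured policy analysis": rank_score(0.02, 0.001, 0.002),
    "inflammatory rumor": rank_score(0.08, 0.030, 0.050),
}

# Sorting by score puts the rumor on top, independent of its accuracy.
for title, score in sorted(posts.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {title}")
```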

Swisher said: “The problem is that this is both a media company, a telephone company and a tech company. As it is architected, it is impossible to govern. Out of convenience we have handed over the keys to them and we are cheap dates for doing so. You get a free map and quick delivery? They get billions and control the world.” Zittrain said the political ad fact-checking controversy is about more than just a difficult product feature. “Evaluating ads for truth is not a mere customer service issue that’s solvable by hiring more generic content staffers,” he said. “The real issue is that a single company controls far too much speech of a particular kind, and thus has too much power.” Dipayan Ghosh, who runs the Platform Accountability Project at Harvard, warned that Facebook’s policy to allow misinformation in political ads means a politician “will have the opportunity to engage in coordinated disinformation operations in precisely the same manner that the Russian disinformation agents did in 2016.”

Today and tomorrow we will be speaking with Jameel Jaffer of the Knight First Amendment Institute, Claire Wardle of First Draft and Sam Lessin, a former VP of product at Facebook, so please tune in.

This startup just raised $8 million to help busy doctors assess the cognitive health of 50 million seniors

All over the globe, the population of people who are aged 65 and older is growing faster than every other age group. According to United Nations data, by 2050, one in six people in the world will be over age 65, up from one in 11 right now. Meanwhile, in Europe and North America, by 2050, one in four people could be 65 or older.

Unsurprisingly, startups increasingly recognize opportunities to cater to this aging population. Some are developing products to sell to individuals and their family members directly; others are coming up with ways to empower those who work directly with older Americans.

BrainCheck, a 20-person, Houston-based startup whose cognitive healthcare product aims to help physicians assess and track the mental health of their patients, is among the latter. Investors like what it has put together, too. Today, the startup is announcing $8 million in Series A funding co-led by S3 Ventures and Tensility Venture Partners.

We talked earlier today with BrainCheck co-founder and CEO Yael Katz to better understand what her company has created and why it might be of interest to doctors who don’t know about it. Our chat has been edited for length and clarity.

TC: You’re a neuroscientist. You started BrainCheck with David Eagleman, another neuroscientist and the CEO of NeoSensory, a company that develops devices for sensory substitution. Why? What’s the opportunity here?

YK: We looked across the landscape, and we realized that most cognitive assessment is [handled by] a subspecialty of clinical psychology called neuropsychology, where patients are given a series of tests, each designed to probe a different type of brain function — memory, visual attention, reasoning, executive function. They measure speed and accuracy, and based on that, determine whether there’s a deficit in that domain. But the tests were classically done on paper, and it was a lengthy process. We digitized them and gamified them and made them accessible to everyone who is upstream of neuropsychology, including neurologists and primary care doctors.

We created a tech solution that provides clinical decision support to physicians so they can manage patients’ cognitive health. There are 250,000 primary care physicians in the U.S. and 12,000 neurologists, and [they’re confronting] what’s been called a silver tsunami. With so many people becoming elderly, it’s not possible for them to address the needs of the aging population without tech to help them.

TC: How does your product work, and how is it administered?

YK: An assessment is done entirely on an iPad and takes about 10 minutes. Assessments are typically administered in a doctor’s office by medical technicians, though they can be administered remotely through telemedicine, too.

TC: These are online quizzes?

YK: Not quizzes and not subjective questions like, ‘How do you think you’re doing?’ but rather objective tasks, like connect the dots, and which way is the center arrow pointing — all while measuring speed and accuracy.
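To make “measuring speed and accuracy” concrete, here is a minimal sketch of how one such task might be scored against a normative baseline. The task, the baseline numbers and the scoring scheme are hypothetical illustrations, not BrainCheck’s actual method:

```python
from statistics import mean

def domain_score(response_times_ms, correct, norm_time_ms, norm_accuracy):
    """Hypothetical composite score for one cognitive domain.
    Roughly 1.0 matches the normative baseline; lower suggests a deficit."""
    accuracy = sum(correct) / len(correct)          # fraction of trials answered correctly
    speed = norm_time_ms / mean(response_times_ms)  # > 1.0 means faster than the norm
    return 0.5 * (accuracy / norm_accuracy) + 0.5 * speed

# Example: four trials of a "connect the dots"-style task (made-up numbers).
times = [850.0, 920.0, 1100.0, 780.0]  # response time per trial, in milliseconds
hits = [True, True, False, True]       # correctness per trial
print(round(domain_score(times, hits, norm_time_ms=900.0, norm_accuracy=0.95), 2))  # 0.89
```

Run once per domain (memory, visual attention, reasoning, executive function), the same scheme would yield the kind of per-domain profile described above.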

TC: How much does it cost these doctors’ offices, and how are you getting word out?

YK: We sell a monthly subscription to doctors, and it’s a tiered pricing model based on volume. We meet doctors at conferences, and we publish blog posts and white papers; through that process, we reach them and sell products to them, beginning with a 30-day free trial, during which time we also give them a web demo.

[What we’re selling] is reimbursable by insurance because it helps them report on and optimize metrics like patient satisfaction. Medicare created a new code to compensate doctors for cognitive care planning, though it was rarely used because the requirements and knowledge involved were so complicated. When we came along, we said, let us help you do what you’re trying to do, and it’s been very rewarding.

TC: Say one of these assessments enables a non-specialist to determine that someone is losing memory or can’t think as sharply. What then?

YK: There’s a phrase: “Diagnose and adios.” Unfortunately, a lot of doctors used to see their jobs as being done once an assessment was made. It wasn’t appreciated that impairment and dementia are things you can address. But about one-third of dementia is preventable, and once you have the disease, it can be slowed. It’s hard because it requires a lot of one-on-one work, so we created a tech solution that uses the output of tests to provide clinical support to physicians so they can manage patients’ cognitive health. We provide personalized recommendations in a way that’s scalable.

TC: Meaning you suggest an action plan for the doctors to pass along to their patients based on these assessments?

YK: There are nine modifiable risk factors found to account for a third of [dementia cases], among them certain medications that can exacerbate cognitive impairment, poorly controlled cardiovascular health, hearing impairment and depression. People can have issues for many reasons — multiple sclerosis, epilepsy, Parkinson’s — but health conditions like major depression and physical conditions like cancer and treatments like chemotherapy can cause brain fog. We suggest a care plan that goes to the doctor, who then uses that information and modifies it. A lot of it has to do with medication management.

A lot of the time, a doctor — and family members — don’t know how impaired a patient is. You can have a whole conversation during a doctor’s visit with someone who regales you with great stories, then realize they have massive cognitive deficits. These assessments kind of put everyone on the same page.

TC: You’ve raised capital. How will you use it to move your product forward?

YK: We’ll be combining our assessments with digital biomarkers like changing voice patterns and a test of eye movements. We’ve developed an eye-tracking technology and voice algorithms, but those are still in clinical development; we’re trying to get FDA approval for them now.

TC: Interesting that changing voice patterns can help you diagnose cognitive decline.

YK: We aren’t diagnosing disease. Think of us as a thermometer that [can highlight] how much impairment is there and in what areas and how it’s progressed over time.

TC: What can you tell readers who might worry about their privacy as it relates to your product?

YK: Our software is HIPAA compliant. We make sure our engineers are trained and up to date. The FDA requires that we put a lot of standards in place and we ensure that our database is built in accordance with best practices. I think we’re doing as good a job as anyone can.

Privacy is a concern in general. Unfortunately, companies big and small have to be ever vigilant about a data breach.
