Citizen Journalists Who Exposed Beijing’s Lies In Wuhan Have Suddenly Vanished

As we reported late Thursday evening, the death toll from the viral outbreak in mainland China has surpassed 600. With global markets once again in the red, Bloomberg reports that Beijing has silenced two of the citizen journalists responsible for much of the horrifying footage seeping onto western social media.

As BBG’s reporter explains, Chinese citizen journalists Chen Qiushi and Fang Bin have effectively been “the world’s eyes and ears” inside Wuhan (much of the footage produced by American news organizations has consisted of drone shots). In recent days, SCMP and other news organizations reporting on the ground and publishing in English have warned that Beijing has stepped up efforts to censor Chinese social media after allowing citizens to vent their frustrations and share news without the usual scrutiny.

On Wednesday, China said its censors would conduct “targeted supervision” of the largest social media platforms, including Weibo, Tencent’s WeChat and ByteDance’s Douyin, all in an effort to mask the dystopian nightmare that life in cities like Wuhan has become.

But that brief period of informational amnesty is apparently over. On Friday, Fang posted a dramatic video showing him being forcibly detained and dragged off to ‘quarantine’. He was detained over a video showing corpses piled up in a Wuhan hospital, though he has since been released.

Chen, meanwhile, seems to have vanished without a trace, and is believed to still be in government detention. We shared one of Chen’s more alarming videos documenting the severe medical supply shortages and outnumbered medical personnel fighting a ‘losing battle’ against the outbreak.

The crackdown on these journalists comes amid an outpouring of public anger over the death of a doctor who was wrongly victimized by police after attempting to warn the public about the outbreak. Beijing tried to cover up the death, denying it to the western press before the local hospital confirmed it.

The videos supplied by the two citizen journos have circulated most freely on Twitter, which is where most in-the-know Chinese go for their latest information about the outbreak. Many “hop” the Great Firewall via a VPN.

“There’s a lot more activity happening on Twitter compared with Weibo and WeChat,” said Maya Wang, senior China researcher at Human Rights Watch. There has been a Chinese community on Jack Dorsey’s short-message platform since before President Xi Jinping rose to power, she added, but the recent crackdown has weakened that social circle.

Chen has now been missing for more than 24 hours, according to several friends in contact with BBG News.

His friends posted a message on his Twitter account saying he has been unreachable since 7 p.m. local time on Thursday. In a texted interview, Bloomberg News’s last question to Chen was whether he was concerned about his safety, as he is among the few people reporting the situation from the front lines.

It’s all part of the great crackdown that Beijing is enforcing, even as the WHO continues to praise the Communist Party for its ‘transparency’.

“After lifting the lid briefly to give the press and social media some freedom,” said Wang about China’s ruling Communist Party, the regime “is now reinstating its control over social media, fearing it could lead to a wider-spread panic.”

With a little luck, the world might soon learn Chen’s whereabouts. Then again, there’s always the chance that he’s never heard from again.


Tyler Durden


Facebook keeps policy protecting political ads | ABS-CBN News


SAN FRANCISCO — Defying pressure from Congress, Facebook said on Thursday that it would continue to allow political campaigns to use the site to target advertisements to particular slices of the electorate and that it would not police the truthfulness of the messages sent out.

The stance put Facebook, the most important digital platform for political ads, at odds with some of the other large tech companies, which have begun to put new limits on political ads.

Facebook’s decision, telegraphed in recent months by executives, is likely to harden criticism of the company heading into this year’s presidential election.

Political advertising cuts to the heart of Facebook’s outsize role in society, and the company has found itself squeezed between liberal critics, who want it to do a better job of policing its various social media platforms, and conservatives, who say their views are being unfairly muzzled.

The issue has raised important questions regarding how heavy a hand technology companies like Facebook — which also owns Instagram and the messaging app WhatsApp — and Google should exert when deciding what types of political content they will and will not permit.

By maintaining the status quo, Facebook executives are essentially saying they are doing the best they can without government guidance and see little benefit to the company or the public in changing.

In a blog post, a company official echoed Facebook’s earlier calls for lawmakers to set firm rules.

“In the absence of regulation, Facebook and other companies are left to design their own policies,” Rob Leathern, Facebook’s director of product management overseeing the advertising integrity division, said in the post. “We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public.”

Other social media companies have decided otherwise, and some had hoped Facebook would quietly follow their lead. In late October, Twitter’s chief executive, Jack Dorsey, banned all political advertising from his network, citing the challenges that novel digital systems present to civic discourse. Google quickly followed suit with limits on political ads across some of its properties, though narrower in scope.

Reaction to Facebook’s policy broke down largely along party lines.

The Trump campaign, which has been highly critical of any attempts by technology companies to regulate political advertising and has already spent more than $27 million on the platform, largely supported Facebook’s decision not to interfere in targeting ads or to set fact-checking standards.

“Our ads are always accurate so it’s good that Facebook won’t limit political messages because it encourages more Americans to be involved in the process,” said Tim Murtaugh, a spokesman for the Trump campaign. “This is much better than the approaches from Twitter and Google, which will lead to voter suppression.”

Democratic presidential candidates and outside groups decried the decision.

“Facebook is paying for its own glowing fake news coverage, so it’s not surprising they’re standing their ground on letting political figures lie to you,” Sen. Elizabeth Warren said on Twitter.

Warren, who has been among the most critical of Facebook and regularly calls for major tech companies to be broken up, reiterated her stance that the social media company should face tougher policies.

The Biden campaign was similarly critical. The campaign has confronted Facebook over an ad run by President Donald Trump’s campaign that attacked Joe Biden’s record on Ukraine.

“Donald Trump’s campaign can (and will) still lie in political ads,” Bill Russo, the deputy communications director for Biden, said in a statement. “Facebook can (and will) still profit off it. Today’s announcement is more window dressing around their decision to allow paid misinformation.”

But many Democratic groups willing to criticize Facebook had to walk a fine line; they have pushed for more regulation when it comes to fact-checking political ads, but they have been adamantly opposed to any changes to the ad-targeting features.

On Thursday, some Democratic outside groups welcomed Facebook’s decision not to limit micro-targeting, but still thought the policy fell short.

“These changes read to us mostly as a cover for not making the change that is most vital: ensuring politicians are not allowed to use Facebook as a tool to lie to and manipulate voters,” said Madeline Kriger, who oversees digital ad buying at Priorities USA, a Democratic super PAC.

Other groups, however, said Facebook had been more thoughtful about political ads than its industry peers.

“Facebook opted against limiting ad targeting, because doing so would have unnecessarily restricted a valuable tool that campaigns of all sizes rely on for fundraising, registering voters, building crowds and organizing volunteers,” said Tara McGowan, chief executive of Acronym, a non-profit group that works on voter organization and progressive causes.

Facebook has played down the business opportunity in political ads, saying the vast majority of its revenue came from commercial, not political, ads. But lawmakers have noted that Facebook ads could be a focal point of Trump’s campaign as well as those of top Democrats.

Facebook’s hands-off ad policy has already allowed for misleading advertisements. In October, a Facebook ad from the Trump campaign made false accusations about Biden and his son, Hunter Biden. The ad quickly went viral and was viewed by millions. After the Biden campaign asked Facebook to take down the ad, the company refused.

“Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is,” Facebook’s head of global elections policy, Katie Harbath, wrote in a letter to the Biden campaign.

In an attempt to provoke Facebook, Warren’s presidential campaign ran an ad falsely claiming that the company’s chief executive, Mark Zuckerberg, was backing the reelection of Trump. Facebook did not take the ad down.

Criticism seemed to stiffen Zuckerberg’s resolve. Company officials said he and Sheryl Sandberg, Facebook’s chief operating officer, had ultimately made the decision to stand firm.

In a strongly worded speech at Georgetown University in October, Zuckerberg said he believed in the power of unfettered speech, including in paid advertising, and did not want to be in the position to police what politicians could and could not say to constituents. Facebook’s users, he said, should be allowed to make those decisions for themselves.

“People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society,” he said.

Facebook officials have repeatedly said significant changes to its rules for political or issue ads could harm the ability of smaller, less well-funded organizations to raise money and organize across the network.

Instead of overhauling its policies, Facebook has made small tweaks. Leathern said Facebook would add greater transparency features to its library of political advertising in the coming months, a resource for journalists and outside researchers to scrutinize the types of ads run by the campaigns.

Facebook also will add a feature that allows users to see fewer campaign and political issue ads in their news feeds, something the company has said many users have requested.

There was considerable debate inside Facebook about whether it should change. Late last year, hundreds of employees supported an internal memo that called on Zuckerberg to limit the abilities of Facebook’s political advertising products.

On Dec. 30, Andrew Bosworth, the head of Facebook’s virtual and augmented reality division, wrote on his internal Facebook page that, as a liberal, he found himself wanting to use the social network’s powerful platform against Trump.

But Bosworth said that even though keeping the current policies in place “very well may lead to” Trump’s reelection, it was the right decision. Dozens of Facebook employees pushed back on Bosworth’s conclusions, arguing in the comments section below his post that politicians should be held to the same standard that applies to other Facebook users.

For now, Facebook appears willing to risk disinformation in support of unfettered speech.

“Ultimately, we don’t think decisions about political ads should be made by private companies,” Leathern said. “Frankly, we believe the sooner Facebook and other companies are subject to democratically accountable rules on this, the better.”

© 2020 The New York Times Company


Twitter makes global changes to comply with privacy laws

Twitter Inc is updating its global privacy policy to give users more information about what data advertisers might receive and is launching a site to provide clarity on its data protection efforts, the company said on Monday.

The changes, which take effect on Jan. 1, 2020, are intended to comply with the California Consumer Privacy Act (CCPA).

The California law requires large businesses to give consumers more transparency and control over their personal information, such as allowing them to request that their data be deleted and to opt-out of having their data sold to third parties.

Social media companies including Facebook Inc and Alphabet Inc’s Google have come under scrutiny on data privacy issues, fueled by Facebook’s Cambridge Analytica scandal in which personal data were harvested from millions of users without their consent.

Twitter also announced on Monday that it is moving the accounts of users outside the United States and European Union, which were previously contracted to Twitter International Company in Dublin, Ireland, to the San Francisco-based Twitter Inc.

The company said this move would allow it the flexibility to test different settings and controls with these users, such as additional opt-in or opt-out privacy preferences, that would likely be restricted by the General Data Protection Regulation (GDPR), Europe’s landmark digital privacy law.

“We want to be able to experiment without immediately running afoul of the GDPR provisions,” Twitter’s data protection officer Damien Kieran told Reuters in a phone interview.

“The goal is to learn from those experiments and then to provide those same experiences to people all around the world,” he said.

The company, which said it has upped its communications about data and security-related disclosures over the last two years, emphasized in a Monday blog post that it was working to upgrade systems and build privacy into new products.

In October, Twitter announced it had found that phone numbers and email addresses used for two-factor authentication may inadvertently have been used for advertising purposes.

Twitter’s new privacy site, dubbed the ‘Twitter Privacy Center’, is part of the company’s efforts to showcase its work on data protection and will also give users another route to access and download their data.

Twitter joins other internet companies that have recently staked out their positions ahead of the CCPA coming into effect. Last month, Microsoft Corp said it would honor the law throughout the United States, and Google told clients it would let sites and apps using its advertising tools block personalized ads as part of its efforts to comply with the CCPA.

Source: Reuters


Why the fight against disinformation, sham accounts and trolls won’t be any easier in 2020

The big tech companies have announced aggressive steps to keep trolls, bots and online fakery from marring another presidential election — from Facebook’s removal of billions of fake accounts to Twitter’s spurning of all political ads.

But it’s a never-ending game of whack-a-mole that’s only getting harder as we barrel toward the 2020 election. Disinformation peddlers are deploying new, more subversive techniques and American operatives have adopted some of the deceptive tactics Russians tapped in 2016. Now, tech companies face thorny and sometimes subjective choices about how to combat them — at times drawing flak from both Democrats and Republicans as a result.

This is our roundup of some of the evolving challenges Silicon Valley faces as it tries to counter online lies and bad actors heading into the 2020 election cycle:

1) American trolls may be a greater threat than Russians

Russia-backed trolls notoriously flooded social media with disinformation around the presidential election in 2016, in what Robert Mueller’s investigators described as a multimillion-dollar plot involving years of planning, hundreds of people and a wave of fake accounts posting news and ads on platforms like Facebook, Twitter and Google-owned YouTube.

This time around — as experts have warned — a growing share of the threat is likely to originate in America.

“It’s likely that there will be a high volume of misinformation and disinformation pegged to the 2020 election, with the majority of it being generated right here in the United States, as opposed to coming from overseas,” said Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights.

Barrett, the author of a recent report on 2020 disinformation, noted that lies and misleading claims about 2020 candidates originating in the U.S. have already spread across social media. Those include manufactured sex scandals involving South Bend, Ind., Mayor Pete Buttigieg and Sen. Elizabeth Warren (D-Mass.) and a smear campaign calling Sen. Kamala Harris (D-Calif.) “not an American black” because of her multiracial heritage. (The latter claim got a boost on Twitter from Donald Trump Jr.)

Before last year’s midterm elections, Americans similarly amplified fake messages such as a “#nomenmidterms” hashtag that urged liberal men to stay home from the polls to make “a Woman’s Vote Worth more.” Twitter suspended at least one person — actor James Woods — for retweeting that message.

“A lot of the disinformation that we can identify tends to be domestic,” said Nahema Marchal, a researcher at the Oxford Internet Institute’s Computational Propaganda Project. “Just regular private citizens leveraging the Russian playbook, if you will, to create … a divisive narrative, or just mixing factual reality with made-up facts.”

Tech companies say they’ve broadened their fight against disinformation as a result. Facebook, for instance, announced in October that it had expanded its policies against “coordinated inauthentic behavior” to reflect a rise in disinformation campaigns run by non-state actors, domestic groups and companies. But people tracking the spread of fakery say it remains a problem, especially inside closed groups like those popular on Facebook.

2) And policing domestic content is tricky

U.S. law forbids foreigners from taking part in American political campaigns — a fact that made it easy for members of Congress to criticize Facebook for accepting rubles as payment for political ads in 2016.

But Americans are allowed, even encouraged, to partake in their own democracy — which makes things a lot more complicated when they use social media tools to try to skew the electoral process. For one thing, the companies face a technical challenge: Domestic meddling doesn’t leave obvious markers such as ads written in broken English and traced back to Russian internet addresses.

More fundamentally, there’s often no clear line between bad-faith meddling and dirty politics. It’s not illegal to run a mud-slinging campaign or engage in unscrupulous electioneering. And the tech companies are wary of being seen as infringing on Americans’ right to engage in political speech — all the more so as conservatives such as President Donald Trump accuse them of silencing their voices.

Plus, the line between foreign and domestic can be blurry. Even in 2016, the Kremlin-backed troll farm known as the Internet Research Agency relied on Americans to boost its disinformation. Now, claims with hazy origins are being picked up without need for a coordinated 2016-style foreign campaign. Simon Rosenberg, a longtime Democratic strategist who has spent recent years focused on online disinformation, points to Trump’s promotion of the theory that Ukraine significantly meddled in the 2016 U.S. election, a charge that some experts trace back to Russian security forces.

“It’s hard to know if something is foreign or domestic,” said Rosenberg, once it “gets swept up in this vast ‘Wizard of Oz’-like noise machine.”

3) Bad actors are learning

Experts agree on one thing: The election interference tactics that social media platforms encounter in 2020 will look different from those they’ve been trying to fend off since 2016.

“What we’re going to see is the continued evolution and development of new approaches, new experimentation trying to see what will work and what won’t,” said Lee Foster, who leads the information operations intelligence analysis team at the cybersecurity firm FireEye.

Foster said the “underlying motivations” of undermining democratic institutions and casting doubt on election results will remain constant, but the trolls have already evolved their tactics.

For instance, they’ve gotten better at obscuring their online activity to avoid automatic detection, even as social media platforms ramp up their use of artificial intelligence software to dismantle bot networks and eradicate inauthentic accounts.

“One of the challenges for the platforms is that, on the one hand, the public understandably demands more transparency from them about how they take down or identify state-sponsored attacks or how they take down these big networks of inauthentic accounts, but at the same time they can’t reveal too much at the risk of playing into bad actors’ hands,” said Oxford’s Marchal.

Researchers have already observed extensive efforts to distribute disinformation through user-generated posts — known as “organic” content — rather than the ads or paid messages that were prominent in the 2016 disinformation campaigns.

Foster, for example, cited trolls impersonating journalists or other more reliable figures to give disinformation greater legitimacy. And Marchal noted a rise in the use of memes and doctored videos, whose origins can be difficult to track down. Jesse Littlewood, vice president at advocacy group Common Cause, said social media posts aimed at voter suppression frequently appear no different from ordinary people sharing election updates in good faith — messages such as “you can text your vote” or “the election’s a different day” that can be “quite harmful.”

Tech companies insist they are learning, too. Since the 2016 election, Google, Facebook and Twitter have devoted security experts and engineers to tackling disinformation in national elections across the globe, including the 2018 midterms in the United States. The companies say they have gotten better at detecting and removing fake accounts, particularly those engaged in coordinated campaigns.

But other tactics may have escaped detection so far. NYU’s Barrett noted that disinformation-for-hire operations sometimes employed by corporations may be ripe for use in U.S. politics, if they’re not already.

He pointed to a recent experiment conducted by the cyber threat intelligence firm Recorded Future, which said it paid two shadowy Russian “threat actors” a total of just $6,050 to generate media campaigns promoting and trashing a fictitious company. Barrett said the project was intended “to lure out of the shadows firms that are willing to do this kind of work,” and demonstrated how easy it is to generate and sow disinformation.

Real-life examples include a hyper-partisan news operation started by a former Fox News executive and Facebook’s accusations that an Israeli social media company profited from creating hundreds of fake accounts. That “shows that there are firms out there that are willing and eager to engage in this kind of underhanded activity,” Barrett said.

4) Not all lies are created equal

Facebook, Twitter and YouTube are largely united in trying to take down certain kinds of false information, such as targeted attempts to drive down voter turnout. But their enforcement has been more varied when it comes to material that is arguably misleading.

In some cases, the companies label the material factually dubious or use their algorithms to limit its spread. But in the lead-up to 2020, the companies’ rules are being tested by political candidates and government leaders who sometimes play fast and loose with the truth.

“A lot of the mainstream campaigns and politicians themselves tend to rely on a mix of fact and fiction,” Marchal said. “It’s often a lot of … things that contain a kernel of truth but have been distorted.”

One example is the flap over a Trump campaign ad — which appeared on Facebook, YouTube and some television networks — suggesting that former Vice President Joe Biden had pressured Ukraine into firing a prosecutor to squelch an investigation into an energy company whose board included Biden’s son Hunter. In fact, the Obama administration and multiple U.S. allies had pushed for removing the prosecutor for slow-walking corruption investigations. The ad “relies on speculation and unsupported accusations to mislead viewers,” the nonpartisan site FactCheck.org concluded.

The debate has put tech companies at the center of a tug of war in Washington. Republicans have argued for more permissive rules to safeguard constitutionally protected political speech, while Democrats have called for greater limits on politicians’ lies.

Democrats have especially lambasted Facebook for refusing to fact-check political ads, and have criticized Twitter for letting politicians lie in their tweets and Google for limiting candidates’ ability to finely tune the reach of their advertising — all examples, the Democrats say, of Silicon Valley ducking the fight against deception.

Jesse Blumenthal, who leads the tech policy arm of the Koch-backed Stand Together coalition, said expecting Silicon Valley to play truth cop places an undue burden on tech companies to litigate messy disputes over what’s factual.

“Most of the time the calls are going to be subjective, so what they end up doing is putting the platforms at the center of this rather than politicians being at the center of this,” he said.

Further complicating matters, social media sites have generally granted politicians considerably more leeway to spread lies and half-truths through their individual accounts and in certain instances through political ads. “We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying,” Facebook CEO Mark Zuckerberg said in an October speech at Georgetown University in which he defended his company’s policy.

But Democrats say tech companies shouldn’t profit off false political messaging.

“I am supportive of these social media companies taking a much harder line on what content they allow in terms of political ads and calling out lies that are in political ads, recognizing that that’s not always the easiest thing to draw those distinctions,” Democratic Rep. Pramila Jayapal of Washington state told POLITICO.

Article originally published on POLITICO Magazine


Dust over death penalty proposal for hate speech

A groundswell of opposition is building against the death penalty proposal for hate speech.

The bill, sponsored by Deputy Chief Whip Senator Aliyu Sabi Abdullahi (Niger North), passed first reading in the Senate yesterday.

Titled “National Commission for the Prohibition of Hate Speeches (Establishment, etc) Bill, 2019”, the bill also proposes the setting up of a commission on hate speech.

Last week, the Senate introduced a bill to regulate social media, punishing what it termed “abuse of social media” with a three-year jail term, an option of a N150,000 fine, or both.

The Social Media Regulation Bill titled: “Protection from Internet falsehood and manipulations bill, 2019” is sponsored by Senator Mohammed Sani Musa (Niger East).

Minister of Information and Culture Alhaji Lai Mohammed has said that the Federal Government is poised to regulate social media.

The Peoples Democratic Party (PDP) caucus in the Senate has vowed to oppose any proposed legislation that would unduly infringe on the rights of Nigerians.

Minority Leader Enyinnaya Abaribe said this while reacting to concerns about the Social Media Bill raised by members of the Leadership and Accountability Initiative, who visited him at the National Assembly.

Abaribe said the PDP senators would oppose the bill if it threatened the fundamental rights of Nigerians guaranteed in Section 39 of the 1999 Constitution, as amended.

Abaribe noted that there were already laws dealing with the issues the proposed legislation seeks to regulate.

He urged Nigerians to ensure mutual respect while freely expressing their views.
Abaribe said: “There is no speed with which this Bill is being passed. The first reading of a Bill is automatic. We can’t make a comment on what is still on the first stage.

“What I can assure you is that this Senate can’t be a party to removing the rights of Nigerians under any guise. Section 39 of the Constitution talks about our freedom as citizens. The 9th Senate will not abridge your rights.

“I don’t think Nigerians who fought and paid the supreme price to entrench this democracy will easily give it away and make us go back to the dark days.

“Rest assured that when we get to that point, we will stand for the people. Every Bill that passes here must pass through the rigours to ensure that it protects the rights of over 200 million Nigerians.

“We have a plethora of laws that can be used to drive the question of driving a free society. While social media can be good, it can also be bad. I am a victim of social media.

“As much as there is freedom, yours stops where another person’s own starts. We urge Nigerians not to propagate falsehood or fake news. Our job is to guarantee the freedoms and rights of both sides.”
Leader of the group, Nwaruruahu Shield, insisted that since there were already laws dealing with defamation, it was superfluous to introduce a fresh anti-social media Bill.

Former Vice President Atiku Abubakar described the introduction of the Anti-Hate Speech Bill by the Senate as an abuse of the legislative process and called on the federal lawmakers to “stop the folly”.

In a statement by his media adviser, Mr Paul Ibe, the former Vice President said the bill sought to violate the constitutionally guaranteed right to freedom of speech of Nigerians.
“It is prudent to build upon the tolerance inherited from those years and not shrink the democratic space to satisfy personal and group interests.

“Freedom of Speech was not just bestowed to Nigerians by the Constitution of the Federal Republic of Nigeria, 1999 (as amended), it is also a divine right given to all men by their Creator.

“History is littered with the very negative unintended consequences that result when this God given right is obstructed by those who seek to intimidate the people rather than accommodate them.

“We should be reminded that history does not repeat itself. Rather, men repeat history. And often, to disastrous consequences,” Atiku said.

He added: “We are now the world headquarters for extreme poverty as well as the global epicentre of out-of-school children. Our economy is smaller than it was in 2015, while our population is one of the world’s fastest growing.

“We have retrogressed in the Corruption Perception Index of Transparency International, from the position we held four years ago, and our Human Development Indexes are abysmally low.

“It therefore begs the question: should we not rather make laws to tackle these pressing domestic challenges, instead of this Bill, which many citizens consider obnoxious?”
Senator Abdullahi sponsored the same Hate Speech Bill during the Eighth Senate, but it attracted widespread condemnation from Nigerians and never returned for a second reading before that Senate elapsed.
The Bill proposes the establishment of a Commission to enforce hate speech laws across the country and ensure the “elimination” of hate speech.
For offences such as harassment on grounds of ethnicity or race, the Bill had proposed that the offender shall be sentenced to “not less than a five-year jail term or a fine of not less than N10 million or both.”
The Bill proposes that “a person who uses, publishes, presents, produces, plays, provides, distributes and/or directs the performance of any material, written and/or visual, which is threatening, abusive or insulting or involves the use of threatening, abusive or insulting words or behaviour” commits an offence.
It added that the charge would be justified if such a person intends to stir up “ethnic hatred”.
The Bill makes provision that any offender found guilty under the Act, once passed, would face life imprisonment, and death by hanging where the act causes loss of life.
“Any person who commits an offence under this section shall be liable to life imprisonment and where the act causes any loss of life, the person shall be punished with death by hanging,” the Bill said.
The Bill provides that “A person who uses, publishes, presents, produces, plays, provided, distributes and/or directs the performance of any material, written and or visual which is threatening, abusive or insulting or involves the use of threatening, abusive or insulting words or behavior commits an offence if such person intends thereby to stir up ethnic hatred, or having regard to all the circumstances, ethnic hatred is likely to be stirred up against any person or person from such an ethnic group in Nigeria.
“Any person who commits an offence under this section shall be liable to life imprisonment and where the act causes any loss of life, the person shall be punished with death by hanging.
“In this section, ethnic hatred means hatred against a group of persons from any ethnic group indigenous to Nigeria.”
On discrimination against persons, the Bill also provides that: “For the purpose of this Act, a person discriminates against another person if, on ethnic grounds and without any lawful justification, he treats another Nigerian citizen less favourably than he treats or would treat persons from his own or another ethnic group, and/or if on grounds of ethnicity he puts another person at a particular disadvantage when compared with persons from other nationalities of Nigeria.
“A person also discriminates against another person if, in any circumstances relevant for the purposes referred to in subsection (1) (b), he applies to that person of any provision, criterion or practice which he applies or would apply equally to persons not of the same race, ethnic or national origins as that other.”
On harassment on the basis of ethnicity, the Bill further provides that “A person subjects another to harassment on the basis of ethnicity for the purposes of this section where, on ethnic grounds, he unjustifiably engages in a conduct which has the purpose or effect of: a) Violating that other person’s dignity or b) Creating an intimidating, hostile, degrading, humiliating, or offensive environment for the person subjected to the harassment.
“Conduct shall be regarded as having the effect specified in subsection (1) (a) or (b) of this section if, having regard to all circumstances, including in particular the perception of that other person, it should reasonably be considered as having that effect.
“A person who subjects another to harassment on the basis of ethnicity commits an offence and shall be liable on conviction to an imprisonment for a term not less than ten years, or to a fine of not less than ten million naira, or to both.”
According to the Bill, the objectives and functions of the proposed Commission on Hate Speech include facilitating and promoting harmonious and peaceful co-existence among all ethnic groups indigenous to Nigeria, chiefly by ensuring the elimination of all forms of hate speech in Nigeria, and advising the Government of the Federal Republic of Nigeria on all aspects thereof.


Facebook, free speech, and political ads – Columbia Journalism Review

A number of Facebook’s recent decisions have fueled a criticism that continues to follow the company, including the decision not to fact-check political advertising and the inclusion of Breitbart News in the company’s new “trusted sources” News tab. These controversies were stoked even further by Mark Zuckerberg’s speech at Georgetown University last week, where he tried—mostly unsuccessfully—to portray Facebook as a defender of free speech. CJR thought all of these topics were worth discussing with free-speech experts and researchers who focus on the power of platforms like Facebook, so we convened an interview series this week on our Galley discussion platform, featuring guests like Alex Stamos, former chief security officer of Facebook, veteran tech journalist Kara Swisher, Jillian York of the Electronic Frontier Foundation, Harvard Law professor Jonathan Zittrain, and Stanford researcher Kate Klonick.

Stamos, one of the first to raise the issue of potential Russian government involvement on Facebook’s platform while he was the head of security there, said he had a number of issues with Zuckerberg’s speech, including the fact that he “compressed all of the different products into this one blob he called Facebook. That’s not a useful frame for pretty much any discussion of how to handle speech issues.” Stamos said the News tab is arguably a completely new category of product, a curated and in some cases paid-for selection of media, and that this means the company has much more responsibility for what appears there. Stamos also said that there are “dozens of Cambridge Analyticas operating today collecting sensitive data on individuals and using it to target ads for political campaigns. They just aren’t dumb enough to get their data through breaking an API agreement with Facebook.”

Ellen Goodman, co-founder of the Rutgers Institute for Information Policy & Law, said that Mark Zuckerberg isn’t the first to have to struggle with tensions between free speech and democratic discourse, “it’s just that he’s confronting these questions without any connection to press traditions, with only recent acknowledgment that he runs a media company, in the absence of any regulation, and with his hands on personal data and technical affordances that enable microtargeting.” Kate Klonick of Stanford said Zuckerberg spoke glowingly about early First Amendment cases, but got one of the most famous—NYT v Sullivan—wrong. “The case really stands for the idea of tolerating even untrue speech in order to empower citizens to criticize political figures,” Klonick said. “It is not about privileging political figures’ speech, which of course is exactly what the new Facebook policies do.”

Evelyn Douek, a doctoral student at Harvard Law and an affiliate at the Berkman Klein Center For Internet & Society, said most of Zuckerberg’s statements about his commitment to free speech were based on the old idea of a marketplace of ideas being the best path to truth. This metaphor has always been questionable, Douek says, “but it makes no sense at all in a world where Facebook constructs, tilts, distorts the marketplace with its algorithms that favor a certain kind of content.” She said Facebook’s amplification of certain kinds of information via the News Feed algorithm “is a cause of a lot of the unease with our current situation, especially because of the lack of transparency.” EFF director Jillian York said the political ad issue is a tricky one. “I do think that fact-checking political ads is important, but is this company capable of that? These days, I lean toward thinking that maybe Facebook just isn’t the right place for political advertising at all.”

Swisher said: “The problem is that this is both a media company, a telephone company and a tech company. As it is architected, it is impossible to govern. Out of convenience we have handed over the keys to them and we are cheap dates for doing so. You get a free map and quick delivery? They get billions and control the world.” Zittrain said the political ad fact-checking controversy is about more than just a difficult product feature. “Evaluating ads for truth is not a mere customer service issue that’s solvable by hiring more generic content staffers,” he said. “The real issue is that a single company controls far too much speech of a particular kind, and thus has too much power.” Dipayan Ghosh, who runs the Platform Accountability Project at Harvard, warned that Facebook’s policy to allow misinformation in political ads means a politician “will have the opportunity to engage in coordinated disinformation operations in precisely the same manner that the Russian disinformation agents did in 2016.”


Today and tomorrow we will be speaking with Jameel Jaffer of the Knight First Amendment Institute, Claire Wardle of First Draft and Sam Lessin, a former VP of product at Facebook, so please tune in.



Reprieve on the way for 119 Nigerians on death row in Malaysia | P.M. News

Malaysia execution for drug trafficking

The 119 Nigerians on death row in Malaysia may be saved from the executioner if the country’s legislature passes a bill to abolish the death penalty, as proposed by the country’s law minister, Liew Vui Keong.

The minister plans to table the bill in the March 2020 sitting of Dewan Rakyat, Malaysia’s Lower House of Parliament.

According to Amnesty’s latest report, “Fatally Flawed: Why Malaysia Must Abolish the Death Penalty”, 1,281 people were on death row as of February 2019.

Foreigners make up a significant 44 per cent (568 people), with Nigerians accounting for 119. The Nigerians were sentenced to death for drug trafficking.

“Nationals from Nigeria made up 21 per cent of this group, with those from Indonesia (16%), Iran (15%), India (10%), Philippines (8%) and Thailand (6%) following suit”, Amnesty said.


“A significant 73 per cent of all those under sentence of death have been convicted of drug trafficking under Section 39(b) of the Dangerous Drugs Act, 1952 — an extremely high figure for an offence that does not even meet the threshold of the ‘most serious crimes’ under international law and standards and for which the death penalty must not be imposed,” AI said in the report.

The Nigerians have not been executed because of a moratorium on executions in place since October 2018 as the government mulls law reform.

A special task force led by immediate past chief justice Richard Malanjum has also been set up to study alternative penalties for laws carrying mandatory capital punishment.

Amnesty’s report points to various flaws in the Malaysian legal system, including the denial of full legal aid to foreigners.

Amnesty also said that insufficient funding of legal aid hinders Malaysians from accessing proper representation, especially those who live in rural areas and cannot afford a lawyer.

“It is further concerning that because of how legal aid is structured in the different schemes that provide no free legal representatives until the trial is due to start, many defendants are left awaiting trial without any legal assistance for significant periods that have extended from months to, in most cases, two to five years,” the report read.

For foreign nationals, the report noted delays of more than 24 hours to several days before their respective embassies were informed of their arrests. This is despite international law which states that prompt communication is necessary.

Amnesty, which campaigns for an end to capital punishment worldwide, called for competent legal representation to be made available to all defendants.

It also called upon the police to inform all detainees of their right to legal aid.

‘Secretive’ pardons, executions

Aside from the pre- and post-trial stages, gaps in legal aid also affected inmates’ ability to get assistance when filing their pardon petitions, Amnesty noted.

When it was available, the report cited a lawyer’s testimony about how prison officials pre-selected inmates who would be able to receive legal aid, all of whom were Malaysians.

“The decision on who gets that support is not transparent and creates an additional degree of arbitrariness and discrimination in the death penalty system,” it said.

The NGO further urged the government to solve the delays and lack of transparency in clemency proceedings.

Pardons can only be granted by the Yang Di-Pertuan Agong and the state rulers after consulting the Pardons Board. However, clear procedures for them are not laid out in Malaysian law except for some guidelines in the Prison Regulations 2000.

In practice, the report noted that inmates are often informed of their right to clemency but not the criteria for pardon consideration.

Inmates and their families are often left without any news from the authorities for a long period after submitting their petition.

The report also noted instances of delays by prison authorities in communicating the result of a pardon petition to an inmate’s family.

In the case of rejected clemency petitions, Amnesty noted that families were not informed of the date and time of impending executions except that they would happen “soon”.

“Some of the letters handed over to the families were dated two weeks earlier, suggesting that the prison authorities had held on to this information until only days before the scheduled date of the hangings,” it said.

Amnesty urged Pardon Boards to disclose all relevant information to inmates to allow them to prepare adequately for the pardon petitions.

It also wanted the boards to promptly update inmates, their families and their lawyers on the progress of their applications.

Following objections to abolishing the death penalty in total, the Pakatan Harapan government is now looking at replacing the mandatory death penalty for 11 serious criminal offences to allow for judicial discretion.


BBC criticised for ‘lack of transparency’ on Naga Munchetty case

Media watchdog Ofcom has said it has “serious concerns around the transparency of the BBC’s complaints process” following its handling of the Naga Munchetty case.

The BBC’s director general Lord Hall recently reversed a decision to partially uphold a complaint against the BBC Breakfast host for comments she made about US President Donald Trump.

Ofcom criticised the “lack of transparency” around the original ruling, which sparked a public outcry, and Lord Hall’s subsequent U-turn.

The regulator has decided not to investigate Munchetty’s exchange with co-host Dan Walker, saying it did not break its broadcasting rules around impartiality.

But it said the corporation should have published more details of the reasons behind both the BBC Executive Complaints Unit [ECU]’s original decision and the subsequent change of mind.

Ofcom said: “The BBC ECU has not published the full reasoning for its partially upheld finding. Neither has the BBC published any further reasoning for the director-general’s decision to overturn that finding.”

‘A matter of urgency’

The case “highlights the need for the BBC to provide more transparency on the reasons for its findings”, the watchdog said, adding that it “will be addressing the BBC’s lack of transparency as a matter of urgency”.

Kevin Bakhurst, Ofcom’s director for content and media policy, said: “We have serious concerns around the transparency of the BBC’s complaints process, which must command the confidence of the public.

“We’ll be requiring the BBC to be more transparent about its processes and compliance findings as a matter of urgency.”

In response, a BBC spokesman said: “We note Ofcom’s finding and the fact they agree with the director-general’s decision.”

The BBC’s complaints framework says that, whenever the ECU upholds or resolves a complaint, it publishes a summary of its findings, rather than its full reasoning.

Ofcom received 18 complaints, mostly about the ECU’s original decision, which said Munchetty was wrong to criticise Mr Trump’s motives after he said four female politicians should “go back” to “places from which they came”.


Letters between the BBC and Ofcom were published by the regulator and revealed a disagreement over whether Ofcom had the right to investigate a BBC programme for breaches of content standards.

The BBC took legal advice on the matter and declined to supply additional information to Ofcom while the regulator was deciding whether to investigate the Breakfast hosts’ comments.

The ECU’s full reasons for partially upholding the original complaint were sent to the complainant, but had not been provided to Ofcom, the watchdog said.

Ofcom said: “We had an exchange of correspondence with the BBC in which we invited the BBC to provide any further background information that it considered relevant for the purposes of helping us to carry out our assessment of the programme against the code.

“The BBC stated that it did not wish to provide any further information at this time. It also questioned whether it was within Ofcom’s remit under the BBC Charter and Agreement to assess this programme.”


Y Combinator-backed Trella brings transparency to Egypt’s trucking and shipping industry

Y Combinator has become one of the key ways that startups from emerging markets get the attention of American investors. And arguably no clutch of companies has benefitted more from Y Combinator’s attention than emerging-market startups tackling the logistics market.

On the heels of the accelerator’s success with Flexport, which is now valued at over $1 billion, and its investment in Rappi, the billion-dollar Latin American on-demand delivery company, several startups from Northern and Southern Africa, Latin America, and Southeast Asia have gone through the program to get in front of Silicon Valley’s venture capital firms. These include companies like Kobo360, NowPorts, and, most recently, Trella.

The Egyptian company founded by Omar Hagrass, Mohammed el Garem, and Pierre Saad already has 20 shippers using its service and is monitoring and managing the shipment of 1,500 loads per month.

“The best way we would like to think of ourselves is that we would like to bring more transparency to the industry,” says Hagrass.

Like other logistics management services, Trella is trying to consolidate a fragmented industry around its app, which increases efficiency by giving carriers and shippers better price transparency and a way to see how cargo is moving around the country.

If the model sounds similar to what Kobo360 and Lori Systems are trying to do in Nigeria and Kenya, respectively, it’s because Hagrass knows the founders of both companies.

Technology ecosystems in these emerging markets are increasingly connected. For instance, Hagrass worked with Kobo360 founder Obi Ozor at Uber before launching Trella. And through Trella’s existing investors (the company has raised $600,000 in financing from Algebra Ventures), Hagrass was introduced to Josh Sandler, the chief executive of Lori Systems.

The three executives often compare notes on their startups and the logistics industry in Northern and Southern Africa, Hagrass says.

While each company has unique challenges, they’re all trying to solve an incredibly difficult problem and one that has huge implications for the broader economies of the countries in which they operate.

For Hagrass, who participated in the Tahrir Square protests, launching Trella was a way to provide help directly to everyday Egyptians without having to worry about the government.

“It’s three times more expensive to transport goods in Egypt than in the U.S.,” says Hagrass. “Through this platform I can do something good for the country.”
