Social media platforms close down racists — but not because of any law
San Francisco — When white supremacists plan rallies like the one a few days ago in Charlottesville, Virginia, they often organise their events on Facebook, pay for supplies with PayPal, book their lodging with Airbnb and ride with Uber. Technology companies, for their part, have been taking pains to distance themselves from these customers.
But sometimes it takes more than automated systems or complaints from other users to identify and block those who promote hate speech or violence, so companies are finding novel ways to spot and shut down content they deem inappropriate or dangerous.
People don’t tend to advertise their political views on their Airbnb accounts, for example, but after matching user names to posts on social-media profiles, the company cancelled dozens of reservations made by self-identified Nazis who were using its app to find rooms in Charlottesville, where they were heading to protest the removal of a Confederate statue.
At Facebook, which relies on community feedback to flag hateful content for removal, the social network’s private groups meant for like-minded people can be havens for extremists, falling through gaps in the content-moderation system. The company is working quickly to improve its machine-learning capabilities so it can automatically surface posts for review by human moderators.
These more aggressive actions mark a shift in how companies view their responsibilities. Virtually all these services have long maintained rules on how users should behave, but in the past they’d mostly enforce these policies in response to bad behaviour. After the violence in Charlottesville, which resulted in the death of a counter-protester, their approach has become more proactive, in anticipation of future events.
While social-media companies have been grappling for years with how to rid their sites of hateful speech and images, the events of the last several days served as a stark reminder of just how real, present and local the threat posed by white supremacists can be.
Ride-hailing company Uber Technologies has told drivers they don’t have to pick up racists; PayPal has said it has the ability to cancel relationships with sites that promote racial intolerance. Even credit card company Discover Financial Services said this week that it was ending its agreements with hate groups. The racial-justice advocacy group Color Of Change said on Wednesday that Apple had also moved to block hate sites from using Apple Pay. Facebook shut down eight group pages that it said violated hate-speech policies, including "Right Wing Death Squad" and "White Nationalists United."
"It’s one thing to say we do not allow hate groups — it’s another thing to actually go and hunt down the groups, make those decisions, and kick those people off," said Gerald Kane, a professor of information systems at the Boston College Carroll School of Management. "It’s something most of these companies have avoided intentionally and fervently over the past 10 years."
Companies historically have steered clear of trying to determine what is good and what is evil, Kane said. But given the increasingly heated public debate in the US, they may feel they need to act, he said.
There’s some precedent. Tech firms have been criticised by governments around the world for their role in the spread of Islamic State (IS) ideology, particularly on Facebook and Twitter. Both companies have stepped up their efforts to remove extremist content, deleting hundreds of thousands of accounts, as well as group pages on Facebook.
"People have wondered, why are they so focused on Islamic extremism, and not white nationalism or white supremacy in their own backyard?" said Emma Llansó, director of the free expression project at the Center for Democracy & Technology. "Now extremists in the US are getting swept up in the same policies."
Tech companies have no legal obligation in the US to respond to calls to censor racist content online. Under Section 230 of the Communications Decency Act of 1996, intermediaries are immune from most litigation claiming that material on their pages is unlawful.
This doesn’t mean the companies aren’t feeling pressure from advertisers and users who fear that pages belonging to alt-right publications, such as the Daily Stormer, could incite violence, said Daphne Keller, director of intermediary liability at Stanford Law School’s Center for Internet and Society. The Daily Stormer lost its domain registration this week, dropped first by GoDaddy and then by Google, and Twitter suspended several associated accounts.
Technology companies are likely to be evaluating their options in consultation with organisations including the Anti-Defamation League before shaping their policy, Keller said. "What’s pushing them is probably a mix of people being revolted by the content, plus the public and advertising pressure," said Keller, who is also former associate general counsel at Google. "Everything they’re doing is because they want to, or because of public pressure. But not because of the law."
In March, Google conceded to marketers, giving them more control over their online ads after a flurry of brands halted spending in the UK amid concerns about offensive content. The company also agreed to expand the definition of hate speech in its advertising policy to include vulnerable racial and socio-economic groups. The changes marked a sharp turn for Alphabet’s Google, which had long hewed to its position as a neutral content host.
Google, Twitter and Facebook continue to face increased pressure to amend their user terms to bring them into compliance with EU law pertaining to illegal content on their websites.
Facebook hired thousands more human moderators this year to try to help it tackle violent content, hate speech and extremism on its platform. Meanwhile, CEO Mark Zuckerberg has, in the past, touted Facebook’s product for groups as a key to improving empathy around the world. But when groups are used to silence others or threaten violence, Facebook will remove them, he said on Wednesday.
"With the potential for more rallies, we’re watching the situation closely and will take down threats of physical harm," Zuckerberg wrote on his Facebook page. "We won’t always be perfect, but you have my commitment that we’ll keep working to make Facebook a place where everyone can feel safe."
A Facebook page remains active for one upcoming rally that has raised concerns among local officials about potential violence: an event hosted by Patriot Prayer at Crissy Field in San Francisco on August 26. Facebook said it was aware of the event, but hasn’t yet found a reason to take it down. The company has to weigh public pressure against its own assessment of the real-world threat.
Because all the decisions are subjective, it’s going to be important for technology companies to make it clear what standards they’re applying when they’re reacting to public outrage, Llansó said. "When does extra scrutiny kick in, if there are other standards, or if it’s a special case? They have a lot of leeway, but they still have a responsibility to their user base to explain what the terms are; when is the company going to weigh in with a values-based judgment?"
Cloudflare, a web-security company that has protected the networks of several neo-Nazi sites, including the Daily Stormer, was criticised in May by ProPublica for doing so. The company has been one of the "worst offenders when it comes to protecting white-supremacist propaganda", said Heidi Beirich, who monitors hate groups for the Southern Poverty Law Center. Cloudflare has defended itself by saying service providers shouldn’t be censoring content on the internet. But on Wednesday, it decided to end its business with the Daily Stormer, saying it could no longer remain neutral because the neo-Nazi website was claiming the company secretly supported its ideology.
"Maybe even they are waking up to this problem," Beirich said. "Maybe this is a moment of reckoning and change — and it sure seems serious right now."
Still, Cloudflare CEO Matthew Prince warned that even as he chose to sever ties with the Daily Stormer, the move could set a dangerous precedent: "After today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don’t like."