Free Speech in Cyberspace

Dated Apr 29, 2020; last modified on Mon, 05 Sep 2022

Should Platforms Be Neutral?

Facebook banned praise, support, and representation of white nationalism and separatism, because the two concepts cannot be meaningfully separated from white supremacy and organized hate groups. Furthermore, people who search for related terms will have “Life After Hate” suggested to them.

Part of me is uncomfortable. Sure, Facebook is trying to do the right thing here. But what if Facebook were replaced by some tyrannical government that censors opposition?

On the other hand, it’s impractical to expect everyone to be robust against misinformation. There are probably times when I’ve been gullible, and a benevolent censor would have reduced the harm.

Maybe the onus is on the speaker to find an audience, and the platform should remain neutral. There’s also mistrust toward Facebook: why not also censor the New Zealand shooter videos on Facebook in Brazil, or black nationalism in South Africa? Maybe as long as it doesn’t affect the bottom line, it’s not a high priority.

Closed groups tend to be echo chambers, sometimes for inaccurate content, e.g. home remedies over vaccination. Facebook has even taken ad money from anti-vaxx groups running smear campaigns. Why shouldn’t Facebook be subjected to the same standard as big pharma, big food, or big radio?

YouTube automatically deleted comments containing ‘共匪’ (‘communist bandit’). It’s a tough spot to be in: either way, someone would be pissed. Sometimes, deletion is for protection, e.g. of “Eric Ciaramella” (alleged CIA whistleblower).

China’s “Commitment on Self-discipline Management of Post Comment” (signed by 29 websites) requires that user comments do not: oppose principles established by the Constitution; endanger national security, unity, honor, and interests; undermine ethnic unity; incite regional discrimination; undermine the state’s religious policies; promote cults, rumors, obscenities, pornography, gambling, violence, murder, terror, or crime; slander others; intimidate others; disseminate private information of minors; use foul language; infringe IP; distribute ads; use non-common languages; veer off-topic; be garbage; or circumvent technical review.

Some of these are vague, e.g. “national interests”, and can be applied in a biased way. Others are surprising, e.g. staying on-topic. Others are not inclusive, e.g. forbidding languages not commonly used on the platform.
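Automated removal of the kind YouTube applied to ‘共匪’ is often just substring matching against a blocklist. A minimal sketch (the blocklist contents and function names are illustrative assumptions, not any platform’s actual implementation):

```python
# Naive blocklist-based comment moderation.
# BLOCKLIST terms are illustrative assumptions.
BLOCKLIST = {"共匪", "communist bandit"}


def should_delete(comment: str) -> bool:
    """Return True if the comment contains any blocked term (case-insensitive)."""
    lowered = comment.lower()
    return any(term.lower() in lowered for term in BLOCKLIST)


def moderate(comments: list[str]) -> list[str]:
    """Keep only comments that pass the blocklist check."""
    return [c for c in comments if not should_delete(c)]
```

The brittleness is the point: substring matching can’t distinguish slander from quotation or news reporting, which is why blanket deletion sweeps up innocuous comments.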

The Trauma Floor: The Secret Lives of FB Moderators in America

Moderators in Phoenix make $28k — while the average FB employee makes $240k.

Team leaders micromanage bathroom breaks. Two Muslim employees were ordered to stop praying during their nine minutes per day of allotted “wellness time”.

Employees can be fired after making just a handful of errors a week, and those who remain live in fear of former colleagues returning to seek vengeance.

Employees have been found having sex inside stairwells and a room reserved for lactating mothers, in what one employee describes as “trauma bonding”.

Moderators cope with seeing traumatic images and videos by telling dark jokes about committing suicide, then smoking weed during breaks to numb their emotions. Moderators are routinely high at work.

Employees are developing PTSD-like symptoms after they leave the company, but are no longer eligible for any support from Facebook or Cognizant.

Employees have begun to embrace the fringe viewpoints of the videos and memes that they are supposed to moderate.

Section 230 of the Communications Decency Act of 1996

Section 230 allows platforms to moderate user-generated content without being liable for that content.

This contrasts with China, where the Cyberspace Administration of China ‘requests’ that platforms censor user-generated content; the alternative to complying is getting banned from operating in China.

The SAFE TECH Act seeks to add more exceptions to this immunity: malicious paid content, platform misuse causing irreparable damage, failure to enforce civil rights laws, lawsuits for direct contribution to loss of life, and lawsuits from victims of platform-enabled human rights violations abroad.

A recurring theme in COS 432 and WWS 351 was how existing laws fail to evolve with the times. In this case, the Act’s sponsors claim that the internet is now mediated by large commercial providers, as opposed to the niche affinity-based groups of 1996.

Gotta admire the acronym game on this one: the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act.

Masnick criticizes the bill’s sponsors for claiming that small companies are too small for anyone to sue (but rich people would sue, e.g. the #MeToo movement would have been sued into oblivion). The main advantage of Section 230 is that companies can dismiss cases without prohibitive legal costs. The criterion that paid content precludes platforms from 230(c)(1) protection affects entities like web hosts, not just the Facebooks and Googles. Section 230 already doesn’t apply if the platform helped create the injurious content. Also, claiming that protecting civil rights and preventing harassment should be built into internet platforms by design belittles the fact that the physical world has been battling these same issues; tech doesn’t have a silver bullet!

Reisenwitz also brings up the prohibitive costs of abiding by the proposed Section 230 without censoring lawsuit-prone content. When SESTA-FOSTA added exclusions to Section 230, platforms reacted by indiscriminately booting sex workers and sex-related content, which harmed sex workers (e.g. lack of access to blocklists, dependence on pimps, etc.).

I don’t follow the argument that the SAFE TECH Act will make sites that can’t moderate effectively abandon moderation altogether. Why would no moderation be a better path than half-assed moderation?

Project Dragonfly

Google planned to release a search app that complies with The Great Firewall’s blocklist. Lots of $$$ to be made: 750m internet users, 95% of whom are on mobile, with Android holding 80% of the mobile market. Google previously ran a censored search engine in China from 2006 to 2010, but discontinued it due to censorship concerns and government attempts to hack Google accounts.
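Complying with a blocklist in search amounts to filtering results whose domains appear on it before they are returned. A minimal sketch of that mechanism (the domains and function are illustrative assumptions, not Dragonfly’s actual design):

```python
# Sketch of blocklist-compliant search: drop any result whose
# domain appears on a censor-supplied blocklist.
# BLOCKED_DOMAINS entries are illustrative assumptions.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"blocked.example"}


def censor_results(urls: list[str]) -> list[str]:
    """Return only results whose domain is not blocklisted."""
    return [u for u in urls if urlparse(u).netloc not in BLOCKED_DOMAINS]
```

Leaked reports also described Dragonfly tying queries to users’ phone numbers; even this toy version shows how little engineering the censorship itself requires, which is partly why employees focused their objections on the precedent rather than the difficulty.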

Googlers opposed Dragonfly because it’d establish a precedent for other surveillance states to request the same, and Google would be complicit in oppression and human rights abuses.

Karan Bhatia, Google VP of Public Policy, told the Senate Judiciary Committee that Project Dragonfly had been terminated.

A point for public outcry influencing company decisions. Another approach is investor activism, e.g. hedge funds influencing big oil.

However, public outcry from employees has the potential to backfire on the employees themselves. Investor activism seems like a more suitable approach, given that a company is beholden to its shareholders.


  1. Standing Against Hate. Mar 27, 2019.
  2. FB Under Pressure to Halt Rise of Anti-Vaccination Groups. Ed Pilkington; Jessica Glenza. Feb 12, 2019.
  3. YouTube Automatically Deletes Comments That Have '共匪'. Nov 10, 2019.
  4. The Trauma Floor: The Secret Lives of FB Moderators in America. Casey Newton. Feb 25, 2019.
  5. Warner, Hirono, Klobuchar Announce the SAFE TECH Act to Reform Section 230. Warner, R. Mark; Hirono, Mazie; Klobuchar, Amy. Feb 5, 2021.
  6. Now It's The Democrats Turn To Destroy The Open Internet: Mark Warner's 230 Reform Bill Is A Dumpster Fire Of Cluelessness. Masnick, Mike. Feb 5, 2021.
  7. The SAFE TECH Act Will Make the Internet Less Safe for Sex Workers. Reisenwitz, Cathy. Mar 23, 2021.
  8. Cyberspace Administration of China > Policies. Accessed Aug 27, 2021.
  9. 29 Websites Sign the "Commitment on Self-discipline Management of Post Comment" (29家网站签署《跟帖评论自律管理承诺书》). Nov 6, 2014. Accessed Aug 23, 2021.
  10. Google Plans to Launch Censored Search Engine in China, Leaked Documents Reveal. Ryan Gallagher. Aug 1, 2018.
  11. We are Google employees. Google must drop Dragonfly. Nov 27, 2018.
  12. A Google VP Told The US Senate The Company Has 'Terminated' The Chinese Search App Dragonfly. Davey Alba. Jul 16, 2019. Accessed Aug 27, 2021.