Should Platforms Be Neutral?
Facebook banned praise, support, and representation of white nationalism and separatism, reasoning that the two concepts cannot be meaningfully separated from white supremacy and organized hate groups. Furthermore, people who search for related terms will have "Life After Hate" suggested to them.
One counterargument: maybe the onus is on the speaker to find an audience, and the platform must remain neutral. There is also mistrust toward Facebook: why not censor the NZ shooter videos on Brazilian Facebook, or black nationalism in South Africa as well? Perhaps as long as something doesn't affect the bottom line, it isn't a high priority.
Closed groups tend to be echo chambers, sometimes for inaccurate content, e.g. favoring home remedies over vaccination. Facebook has even taken ad money from anti-vaxx groups running smear campaigns. Why shouldn't Facebook be held to the same standard as big pharma, big food, or big radio?
The Cyberspace Administration of China requires that user comments do not: oppose principles established by the Constitution; endanger national security, unity, honor, or interests; undermine ethnic unity; incite regional discrimination; undermine the state's religious policies; promote cults, rumors, obscenity, pornography, gambling, violence, murder, terror, or crime; slander others; intimidate others; disseminate the private information of minors; use foul language; infringe intellectual property; distribute ads; use non-common languages; veer off-topic; be garbage; or circumvent technical review.
Moderators in Phoenix make $28k a year, while the average Facebook employee makes $240k.
Team leaders micromanage bathroom breaks. Two Muslim employees were ordered to stop praying during their nine minutes per day of allotted "wellness time".
Employees can be fired after making just a handful of errors a week, and those who remain live in fear of former colleagues returning to seek vengeance.
Employees have been found having sex inside stairwells and a room reserved for lactating mothers, in what one employee describes as “trauma bonding”.
Moderators cope with seeing traumatic images and videos by telling dark jokes about committing suicide, then smoking weed during breaks to numb their emotions. Moderators are routinely high at work.
Employees are developing PTSD-like symptoms after they leave the company, yet are no longer eligible for any support from Facebook or Cognizant.
Employees have begun to embrace the fringe viewpoints of the videos and memes that they are supposed to moderate.
Section 230 of the Communications Decency Act of 1996
Section 230 shields platforms from liability for user-generated content (§230(c)(1)) and allows them to moderate that content in good faith without losing that shield (§230(c)(2)).
The SAFE TECH Act seeks to add more exceptions to this immunity: malicious paid content, platform misuse causing irreparable damage, failure to enforce civil rights laws, lawsuits for direct contribution to loss of life, and lawsuits from victims of platform-enabled human rights violations abroad.
Critics fault the bill's sponsors for claiming that small companies are too small for anyone to bother suing (wealthy plaintiffs would sue; the #MeToo movement, for instance, would have been sued into oblivion). The main advantage of Section 230 is that companies can get cases dismissed without prohibitive legal costs. The criterion that paid content precludes platforms from 230(c)(1) protection affects entities like web hosts, not just the Facebooks and Googles. Section 230 already doesn't apply if the platform helped create the injurious content. And claiming that protecting civil rights and preventing harassment should be built into internet platforms by design belittles the fact that the physical world has been battling these same issues; tech doesn't have a silver bullet!
Another critique brings up the prohibitive cost of abiding by the proposed Section 230 without censoring lawsuit-prone content. When SESTA-FOSTA added to the exclusions of Section 230, platforms reacted by indiscriminately booting sex workers and sex-related content, which harmed sex workers (e.g. loss of access to blocklists, greater dependence on pimps).
Google planned to release a search app that complies with the Great Firewall's blocklist. There was lots of money to be made: 750 million internet users, 95% of whom use mobile, with Android holding 80% of the mobile market. Google previously ran a censored search engine in China from 2006 to 2010, but discontinued it due to censorship demands and government attempts to hack Google accounts.
Ed Pilkington and Jessica Glenza. "Facebook Under Pressure to Halt Rise of Anti-Vaccination Groups." www.theguardian.com. Feb 12, 2019.
Casey Newton. "The Trauma Floor: The Secret Lives of Facebook Moderators in America." www.theverge.com. Feb 25, 2019.
Mark R. Warner, Mazie Hirono, and Amy Klobuchar. "Warner, Hirono, Klobuchar Announce the SAFE TECH Act to Reform Section 230." www.warner.senate.gov. Feb 5, 2021.
Mike Masnick. "Now It's The Democrats' Turn To Destroy The Open Internet: Mark Warner's 230 Reform Bill Is A Dumpster Fire Of Cluelessness." www.techdirt.com. Feb 5, 2021.
Cathy Reisenwitz. "The SAFE TECH Act Will Make the Internet Less Safe for Sex Workers." cathyreisenwitz.substack.com. Mar 23, 2021.
"Cyberspace Administration of China > Policies." en.wikipedia.org. Accessed Aug 27, 2021.
Ryan Gallagher. "Google Plans to Launch Censored Search Engine in China, Leaked Documents Reveal." theintercept.com. Aug 1, 2018.
"We are Google employees. Google must drop Dragonfly." medium.com. Nov 27, 2018.
Davey Alba. "A Google VP Told The US Senate The Company Has 'Terminated' The Chinese Search App Dragonfly." www.buzzfeednews.com. Jul 16, 2019. Accessed Aug 27, 2021.