This chapter examines how content is moderated. Content moderation is the practice of monitoring user-generated submissions and applying a predetermined set of guidelines to determine whether the content (typically a post) is permissible.
Categorization by Purpose
Laws and Morality
Content moderation can serve to maintain a clean network environment. Web search engines like Google and Bing implicitly conduct content moderation: websites hosting illegal content such as slave auctions, smuggling, and drug trading are removed from public view, and such hidden sites are collectively referred to as the "Dark Web".
Section 230 of the Communications Decency Act dictates the legality of certain instances of content moderation. It states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," meaning that a social media platform or the host of a website cannot be held legally responsible for content posted by its users. It also states that "No provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." Together, these provisions mean that online hosts cannot be held legally responsible for illegal or unwanted content posted to their platforms, nor can they be held liable for removing such content in good faith.
Platforms are not protected by Section 230 if they edit or curate illegal content. Federal criminal violations, intellectual property violations, and violations of electronic communications privacy laws are also exempt from protection by Section 230.
There are many techniques for moderating explicit content. One example is language filtering: many chat rooms have a "chat filter" feature that replaces socially offensive words with asterisks or other symbols. Although it cannot completely stop verbal abuse, it helps maintain a clean environment. Another example is video censorship: besides age restrictions, video products are often modified to remove certain content from an audience. In Japanese anime, for example, scenes containing blood or nudity are covered with mosaic tiling or dots.
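A minimal sketch of how such a chat filter might work is shown below. The blocklist and masking strategy are illustrative assumptions; real chat filters use much larger curated word lists and handle obfuscated spellings.

```python
import re

# Illustrative blocklist; real filters use much larger, curated lists.
BLOCKED_WORDS = {"darn", "heck"}

def filter_message(message: str) -> str:
    """Replace each blocked word with asterisks of the same length."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKED_WORDS else word
    # Scan word by word so surrounding punctuation is preserved.
    return re.sub(r"[A-Za-z]+", mask, message)

print(filter_message("What the heck is this?"))  # What the **** is this?
```

Masking by length rather than deleting the word keeps the message readable while signaling that something was removed.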
Information about classified military secrets is prohibited from public disclosure. If a photo revealing classified details of a U.S. military base is made public, federal authorities will quickly remove it and may prosecute the person responsible. Facetious posts may be exempt but are still watched. For example, there were long-standing rumors that Area 51 contained alien technology, though no proof ever surfaced. Detailed discussion of sensitive technologies such as quantum encryption, gene-targeting viruses, and nuclear reaction control is also monitored.
Lawmakers are worried about a lack of moderation on TikTok, as the platform does little to combat misinformation that spreads quickly, especially among younger users. Per TikTok's privacy agreement, the app collects "information when [users] create an account and use the Platform" and "information you share with [TikTok] from third-party social network providers, and technical and behavioral information about [users'] use of the Platform." The Chinese-owned company's lack of moderation and widespread data collection have raised concerns for U.S. national security.
Content moderation can also be regulated by the government. By controlling the information the public receives, along with self-efficacy campaigns, it is possible to steer public opinion.
In the 2017 case Knight First Amendment Institute v. Trump, Judge Naomi Reice Buchwald found it unconstitutional for President Donald Trump to block Twitter accounts with opposing political views from his personal account, @realDonaldTrump. The decision was upheld in 2018 after the government appealed. Another lawsuit on the same issue, launched in 2020 by the Knight First Amendment Institute at Columbia University, is still pending before the United States Supreme Court.
Categorization by Method
Pre-moderation is a style of content moderation employed by companies that care about their image above all else. Every piece of content is reviewed before release to make sure it does not hurt the brand in any way or cause legal issues. Although pre-moderation is not feasible for platforms that experience a large influx of content, such as social media sites, it can be helpful for company blogs or similar sites.
Post-moderation refers to a type of content moderation in which content, once submitted to a platform, can be reviewed and taken down at any time if it is found to violate a site policy. Post-moderation amounts to a blanket policy that applies to most platforms currently in use: most companies reserve the right to remove content from their platforms if it violates any of their terms or conditions.
Reactive moderation is a type of moderation in which a platform relies on its community to review and screen posts. The individuals viewing the content become responsible for judging whether it is appropriate and, if it is not, reporting it so that a moderator can review and delete it if necessary. This type of moderation is used on most social media sites, as it lets a site leverage its large community to cope with the influx of content.
Distributed moderation is similar to reactive moderation in that it entrusts the community with moderating content, but rather than reporting only inappropriate content, users vote on every piece of content submitted. This often produces a form of groupthink, in which the majority determines what content is permissible.
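The voting mechanic behind distributed moderation can be sketched in a few lines. The hide threshold below is a hypothetical value chosen for illustration; real platforms tune such thresholds and combine them with other signals.

```python
from dataclasses import dataclass, field

# Hypothetical threshold for illustration; real platforms tune this value.
HIDE_SCORE = -5

@dataclass
class Post:
    text: str
    votes: list = field(default_factory=list)  # +1 or -1 per user

    @property
    def score(self) -> int:
        return sum(self.votes)

    @property
    def visible(self) -> bool:
        # A post disappears once community votes push it below the threshold.
        return self.score > HIDE_SCORE

post = Post("example submission")
post.votes.extend([-1] * 6)  # six users down-vote
print(post.score, post.visible)  # -6 False
```

Because every submission is scored by the crowd rather than flagged case by case, unpopular content sinks automatically, which is exactly the groupthink dynamic described above.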
Automated moderation relies specifically on automated tools to filter content, including word filters, algorithms for text and word analysis, and more. Many believe this form of moderation will become the future of the field. Most sites currently use some form of automated moderation in their suite of tools, although in some cases the technology has not progressed enough to be suitable on its own.
Status Quo & Social Media
Regarding the status quo, several large social media companies are similar in size and scope but employ different forms of content moderation for their expansive communities. We will focus on Facebook/Instagram, Twitter, Reddit, YouTube, Twitch, TikTok, and Snapchat.
Facebook & Instagram
Facebook and Instagram share a parent company and have similar moderation policies. They mainly employ reactive moderation, in which the community is responsible for flagging and reporting explicit content. The company also uses a great deal of automated moderation, not so much for removing content as for detecting duplicate accounts. Facebook invests more in content moderation than any other platform, and as such it is arguably the most successful at removing explicit content. However, the moderators tasked with cleaning up the posts suffer for it: every day at work they are exposed to the "worst of humanity," and many develop PTSD or depression and cannot continue working as a result. According to Facebook Inc.'s Q3 2019 Transparency Report, Instagram removed 2 million posts compared to Facebook's 35 million.
Due to the COVID-19 pandemic, in 2020 both Facebook and Instagram severely curtailed the user appeal process in favor of purely automated moderation. As a result, Instagram's successful user appeals (where Instagram restores a photo it has taken down at the user's request) fell from 8,100 in Q1 2020 to zero in Q2 2020. Though it is still unknown whether the shift to entirely automated systems is temporary or permanent, the policies on both platforms have become much stricter. Political misinformation is also generally permitted on both platforms, although they ban certain widespread disinformation campaigns and label potentially misleading posts as such.
Beginning in 2018, Twitter slowly moved into the spotlight of content moderation debates due to the prevalence of politics on the site. The company relies mainly on automated moderation, with less of a focus on removing content and more on how content is discovered and amplified. Recently, Twitter has become more aggressive in flagging and fact-checking misleading posts, and it does not hesitate to flag posts by influential figures, including President Donald Trump. Twitter also proactively surfaces potential content violations for human review rather than waiting for users to report them, part of a push to manage toxic content and harassment and to suspend accounts related to conspiracy groups.
Reddit uses a style of content moderation it has dubbed "layered moderation". At its core, this is a combination of distributed and reactive moderation. Users "up-vote" and "down-vote" posts, acting as a kind of moderator by curating high-quality information for other users to see. While this is generally seen as a good way to manage content, it can also lead to a "hivemind" mentality, where everyone on Reddit sees content the majority has curated. Users can also report posts for "subreddit" moderators to manually review and escalate or remove if necessary. Reddit additionally employs tools for automated moderation, including the "AutoModerator", a bot that automates many of the manual tasks subreddit moderators would otherwise perform.
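A simplified sketch of the kind of rule-based automation AutoModerator provides is shown below. The rule format and example patterns here are hypothetical illustrations, not Reddit's actual configuration syntax (which is YAML-based and far richer).

```python
# Hypothetical rules in the spirit of AutoModerator: each rule pairs a
# text pattern with an action a subreddit moderator would otherwise take.
RULES = [
    {"contains": "buy followers", "action": "remove"},  # obvious spam
    {"contains": "http://", "action": "flag"},          # insecure links for review
]

def apply_rules(post_text: str) -> str:
    """Return the first matching rule's action, or 'approve' if none match."""
    text = post_text.lower()
    for rule in RULES:
        if rule["contains"] in text:
            return rule["action"]
    return "approve"

print(apply_rules("Buy followers cheap!"))  # remove
print(apply_rules("Nice photo"))            # approve
```

Rules like these handle the repetitive, unambiguous cases automatically, leaving human moderators to review only the flagged or reported content.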
Reddit was previously one of the few places on the internet where nearly unrestricted free speech was allowed. More recently, however, it has started banning subreddits that host hate groups and misinformation, targeting groups promoting racism, homophobia, or misogyny. In June 2020, it went a step further and banned /r/The_Donald, a subreddit that discussed and supported President Donald Trump, along with /r/NoMask, a subreddit opposed to wearing masks during the COVID-19 pandemic.
YouTube is unique in that it employs the most automated tools of any platform mentioned here: its algorithms are used not only for recommending videos but also for content moderation. YouTube is also the one platform mentioned where people can make a living by uploading content, so one of its main forms of moderation is "demonetization". For offending accounts, YouTube also has a "three-strike" system: after a first warning, accounts undergo a series of progressively harsher punishments until, if nothing changes, the account is banned.
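The escalation logic of a three-strike system can be modeled in a few lines. The specific penalty names below are simplified assumptions for illustration, not YouTube's exact enforcement schedule.

```python
# Illustrative model of a three-strike policy; the penalty ladder is a
# simplified assumption, not YouTube's actual rules.
class StrikeTracker:
    PENALTIES = ["warning", "1-week upload freeze", "2-week upload freeze", "banned"]

    def __init__(self):
        self.strikes = 0

    def record_violation(self) -> str:
        """Escalate one step per violation; cap at the final penalty."""
        penalty = self.PENALTIES[min(self.strikes, len(self.PENALTIES) - 1)]
        self.strikes += 1
        return penalty

account = StrikeTracker()
for _ in range(4):
    print(account.record_violation())
# warning
# 1-week upload freeze
# 2-week upload freeze
# banned
```

The key design point is that penalties escalate per account rather than per post, giving creators a chance to change behavior before the terminal ban.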
The video game livestreaming platform Twitch, owned by Amazon, employs distributed and automated moderation. Individual channels have moderators responsible for moderating Twitch's chat feature and deleting inappropriate messages in real time. Twitch also uses automated systems for copyrighted content, often temporarily banning streamers who use copyrighted audio in their streams.
Twitch has recently faced backlash from large streamers over very strict moderation policies. Under the DMCA, which governs copyrighted content, Twitch has retroactively banned streamers who used copyrighted music in past streams, even from before the new enforcement policy was implemented. This has led big streamers to delete all of their past broadcasts to avoid a ban. Streamers have also been banned after accidentally viewing inappropriate content through no fault of their own, even when they instantly clicked away from it. This disconnect between Twitch and its most profitable streamers has caused some to move to other livestreaming platforms.
TikTok, owned by the China-based company ByteDance, is a relatively new social media platform where people share short videos. TikTok primarily relies on automated moderation and states that less than 1% of the content it removes is related to hate speech and disinformation. However, TikTok has been accused of removing content based on Chinese political sensitivities and of suppressing content on its "For You Page" from users it deems "ugly" so it can attract more people to the app. The company has denied both accusations.
Snapchat, a social media app where users share disappearing photos, focuses primarily on post-moderation and distributed moderation. This human-centered approach to content moderation is unusual for such a large social media platform. Snapchat can afford fewer automated systems because most content on the app exists for at most 24 hours before it is deleted. Most interactions between users are private, with the exception of the "Discover" page, where companies and influencers post bite-sized content. Snapchat has also taken a strong stance on racial justice by removing controversial figures, such as Donald Trump, entirely from the Discover page.
The Communications Decency Act
Both Republicans and Democrats have expressed concerns regarding Section 230 of the Communications Decency Act. Democrats believe that Section 230 limits the censoring of hate speech and illegal content and that repealing or amending it would force tech companies to dispel inappropriate content. Republicans believe that Section 230 allows tech companies to suppress conservative opinions and influence public thought. In May 2020, President Donald Trump issued an executive order to end online censorship, stating that "[The United States] cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet." No changes to Section 230 have yet been made.
Freedom of Speech
The use of content moderation by social media platforms has led to concerns about its implications for freedom of speech on these platforms. One reason is the lack of transparency around the rules governing moderation. David Kaye, UN Special Rapporteur on freedom of opinion and expression, called the murkiness of these rules "one of the greatest threats to online free speech today," adding that "companies impose rules they have developed without public input and enforced with little clarity". Differing expectations among users about what content should be removed have only heightened these concerns. One example is the reaction to Facebook's decision not to remove a doctored video of Nancy Pelosi, slowed down to make her appear inebriated: while some were frustrated by Facebook's failure to contain the spread of misinformation, others applauded the company for protecting freedom of speech on the platform.
2016 U.S. Presidential Election
The 2016 U.S. Presidential election sparked initial talks of political content moderation after it was discovered Russia leveraged social media to influence the results of the election. Misinformation campaigns spearheaded by Russia’s Internet Research Agency (IRA) emphasized the need for greater controls on user content. 
2020 U.S. Presidential Election
The 2020 U.S. Presidential election spurred discussion of content moderation as social media sites gained influence leading up to election day. While social media may have increased civic engagement, especially among youths, misinformation and disinformation also spread. Twitter flagged President Trump's unsubstantiated election-fraud claims as potential misinformation, but despite these flags the posts were still viewed millions of times, blurring the line between fact and false narrative. Another issue is the opaque decision-making behind removing or modifying content, as when Facebook removed "Stop the Steal", a group with over 300,000 members that was used to organize protests against the election results. Decisions like these culminated in the CEOs of Twitter and Facebook defending their content moderation practices during a congressional hearing after both platforms decided to curb the spread of claims about the son of Democratic presidential candidate Joe Biden.
Tech companies predominantly use outsourced contract labor for moderation work. This allows companies to scale their operations globally at the expense of the workers, who are paid far less than salaried employees. At Cognizant, a contractor in Arizona supplying content moderation for Facebook, moderators made $15 an hour, dwarfed by the median Facebook employee salary of $240,000 a year.
Moderators manually review the most disturbing content on the internet, often without the resiliency training and other services necessary to prepare them. Moderators are also held to high standards, with Facebook setting a target of 95% accuracy on moderator decisions, creating a chaotic environment with high turnover as many moderators are unable to maintain this accuracy. Companies try to help moderators cope with "wellness time", meant to allow traumatized workers to take a break. At Cognizant, employees were allotted only nine minutes of wellness time per day, and this time was monitored to make sure workers were using it correctly. The long-term effects of exposure to disturbing content have led former moderators to develop PTSD-like symptoms. One example is Selena Scola, a former Facebook moderator who is suing the company after developing PTSD, arguing that it lacks proper mental health services and monitoring for its content moderators.
Case Study: Hong Kong
Initially, the 2019 Hong Kong protests consisted of citizens peacefully marching against an extradition bill; they later turned violent. The protests were reported and interpreted very differently across regions, leading to divergent reactions to the events. Content moderation played a significant role in this case.
In mainland China, the protests were reported as a "rebellion" and an "insurgence with conspiracy", while in the United States, ABC referred to them as "pro-democracy" protests and a fight for freedom. CNN reported that some NBA fans also supported the protests, in what resembled a social-norms campaign. There were also reports of abuse by Hong Kong police, and some people in America called for action to help the protesters.
However, certain viewpoints were reportedly hidden from the United States public. Facebook and Twitter were reported to be shaping the story through content moderation, deleting nearly a thousand Chinese accounts. The removed accounts had simply expressed anti-protest opinions, though the sites claimed the accounts were associated with the Chinese government. Even if content moderation is not the primary reason some Americans strongly favor the protests, it clearly affects public opinion.
Automated Versus Human Moderation
While companies make their basic moderation principles known, the balance between automated and human moderation is rarely discussed. Algorithmic decisions are driven largely by commercial incentives, causing transparency and accountability issues. These issues are a symptom of there being no global standard for companies whose platforms host political discourse between users. Some argue for an open political debate to help determine the norms of acceptable online political communication.
The future of content moderation will include an increased focus on using AI and machine learning to automate moderation processes. Artificial neural networks and deep-learning techniques have already helped automate tasks such as speech recognition, image classification, and natural language processing, lessening the burden on human moderators. These AI applications can make more consistent moderation decisions than human moderators, but they are only as effective as their training is extensive. Currently there are not enough labeled examples of content to train robust AI models, so models are easily confused when content is presented differently than in training. Current AI systems are also unable to comprehend the context and intent that may be crucial to deciding whether to remove a post. This can be seen in the gap between Facebook's automated detection of nudity and of hate speech, which are accurately detected 96% and 38% of the time, respectively. Because of these limitations, a mix of automated and human moderation will likely remain the norm for some time.
If Section 230 of the Communications Decency Act is repealed, tech platforms could be held responsible for the content posted by users and would therefore need to censor anything that could lead to legal issues. This would require intensive moderation techniques and might lead many websites to get rid of user content altogether.
There are generalizable lessons to be taken from the case of content moderation. One is how transparency affects user trust: the lack of transparency in moderation guidelines and enforcement is deeply frustrating for users and leads them to reach their own conclusions about why their posts are taken down, such as suspecting bias or assuming a false positive. Greater transparency would alleviate this problem, which is why many are calling on tech companies to adopt guidelines such as The Santa Clara Principles to make the moderation process more transparent. Others can also learn from tech companies' use of contract labor: for a hazardous job such as content moderation, low wages and insufficient benefits place a large financial burden on workers who develop mental health conditions from their time as moderators.
Extensions to the casebook chapter could explore in more detail the current AI and machine learning technologies used today, the presence of bias in the moderation process, and how the phenomenon of fake news will change the moderation process.
- Content Moderation
- Section 230 of the Communications Decency Act
- Unpacking TikTok, Mobile Apps and National Security Risks
- Six Types of Content Moderation You Need to Know About
- How does Facebook moderate its extreme content
- The Secret Lives of Facebook Moderators in America
- Facebook has released Instagram content moderation data for the first time
- Facebook's Most Recent Transparency Report Demonstrates Pitfalls of Automated Content Moderation
- Twitter shares content moderation plans, highlights contrast with Facebook
- The Complex Debate Over Silicon Valley’s Embrace of Content Moderation
- Reddit Security Report -- October 30, 2019
- Full AutoModerator Documentation
- Reddit is Finally Facing its Legacy of Racism
- Reddit Bans r/The_Donald and r/ChapoTrapHouse as Part of a Major Expansion of its Rules
- YouTube Doesn't Know Where Its Own Line Is
- The Yellow $: a comprehensive history of demonetization and YouTube’s war with creators
- Community Guidelines Strike Basics
- Content Moderation at Scale Especially Doesn't Work when you Hide All the Rules
- Twitch Apologizes for Recent DMCA Takedowns, but has no Real Solutions Yet
- Twitch's Continuous Struggle With Moderation Shines a Light on Platform's Faults
- TikTok Reveals Content Moderation Stats Amid Growing Global Pressure
- Invisible Censorship: TikTok told Moderators to Suppress Posts by "Ugly" People and the Poor to Attract New Users
- Snapchat Emphasizes Human Content Moderation in App Redesign
- Content Moderation Issues are Taking Center Stage in The Presidential Election Campaign
- Protecting Americans from Dangerous Algorithms Act
- Senator Hawley Introduces Legislation to Amend Section 230 Immunity for Big Tech Companies
- Executive Order on Preventing Online Censorship
- UN Expert: Content moderation should not trample free speech
- The Thorny Problem of Content Moderation and Bias
- BBC. (2018, December 17). Russia 'meddled in all big social media' around US election. BBC News. https://www.bbc.com/news/technology-46590890.
- Hinckle, M., & Moore, H. (2020, November 3). Social Media's Impact on the 2020 Presidential Election: The Good, the Bad, and the Ugly. https://research.umd.edu/news/news_story.php?id=13541.
- Bose, N., & Bartz, D. (2020, November 17). 'More power than traditional media': Facebook, Twitter policies attacked. Reuters. https://www.reuters.com/article/usa-tech-senate/more-power-than-traditional-media-facebook-twitter-policies-attacked-idUSKBN27X186.
- Underpaid and overburdened: the life of a Facebook moderator
- Content Moderator Sues Facebook, Says Job Gave Her PTSD
- Truth about US behind HK Protest
- Reinforcement Has Arrived in HK against the Rebellion
- Hong Kong pro-democracy protests continue
- NBA fans protest China with pro-Hong Kong T-shirt giveaway in Los Angeles
- Hong Kong Police Crack Down on Student Protesters
- Protect the rights of people in Hong Kong
- Twitter and Facebook bans Chinese accounts amidst Hong Kong protests
- Hong Kong protests: Twitter and Facebook remove Chinese accounts
- Reich, R., & Schaake, M. (2020). Election 2020: Content Moderation and Accountability. Stanford, CA: Stanford University.
- Human Help Wanted: Why AI Is Terrible at Content Moderation
- The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People